Test Report: Docker_Linux_containerd 21975

bf5d9cb38ae1a2b3e4a9e22e363e3b0c86085c7c:2025-11-24:42481

Failed tests (15/332)

TestDockerEnvContainerd (42.64s)
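The core failure, visible in the log below, is a legacy-builder (BuildKit-disabled) docker build sent to the cluster's Docker endpoint over the SSH tunnel configured by docker-env --ssh-host --ssh-add; the daemon rejects it with "Error response from daemon: exit status 1", and the follow-up docker image ls check then cannot find the tag. A rough by-hand reproduction, assuming the docker-env output is eval'd into the current shell (the test harness instead captures the variables) and that the commands are run from the integration test directory so testdata/docker-env resolves:

# start a containerd-runtime cluster with the docker driver (same flags the test uses)
out/minikube-linux-amd64 start -p dockerenv-637175 --driver=docker --container-runtime=containerd

# point the local docker CLI at the cluster over SSH and load the node key into an agent
eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-637175)"

# the step that fails in this run: a legacy (non-BuildKit) build over the ssh:// DOCKER_HOST
DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env

# hand equivalent of the test's final check, which looks for the tag in the listing
docker image ls | grep local/minikube-dockerenv-containerd-test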

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-637175 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-637175 --driver=docker  --container-runtime=containerd: (24.192691606s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-637175"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXZPfAlg/agent.33078" SSH_AGENT_PID="33079" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXZPfAlg/agent.33078" SSH_AGENT_PID="33079" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXZPfAlg/agent.33078" SSH_AGENT_PID="33079" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": exit status 1 (2.372741646s)

-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:245: failed to build images, error: exit status 1, output:
-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXZPfAlg/agent.33078" SSH_AGENT_PID="33079" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
docker_test.go:255: failed to detect image 'local/minikube-dockerenv-containerd-test' in output of docker image ls
panic.go:615: *** TestDockerEnvContainerd FAILED at 2025-11-24 02:29:49.490959002 +0000 UTC m=+334.215312835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestDockerEnvContainerd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect dockerenv-637175
helpers_test.go:243: (dbg) docker inspect dockerenv-637175:

-- stdout --
	[
	    {
	        "Id": "fa2f82c3c1d5b684b9835bb4c38f0a99ec99948c51801886e356a322dfb8d35b",
	        "Created": "2025-11-24T02:29:16.142644588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 30501,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T02:29:16.174992054Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/fa2f82c3c1d5b684b9835bb4c38f0a99ec99948c51801886e356a322dfb8d35b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fa2f82c3c1d5b684b9835bb4c38f0a99ec99948c51801886e356a322dfb8d35b/hostname",
	        "HostsPath": "/var/lib/docker/containers/fa2f82c3c1d5b684b9835bb4c38f0a99ec99948c51801886e356a322dfb8d35b/hosts",
	        "LogPath": "/var/lib/docker/containers/fa2f82c3c1d5b684b9835bb4c38f0a99ec99948c51801886e356a322dfb8d35b/fa2f82c3c1d5b684b9835bb4c38f0a99ec99948c51801886e356a322dfb8d35b-json.log",
	        "Name": "/dockerenv-637175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "dockerenv-637175:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "dockerenv-637175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fa2f82c3c1d5b684b9835bb4c38f0a99ec99948c51801886e356a322dfb8d35b",
	                "LowerDir": "/var/lib/docker/overlay2/1faffaaf8e7125b9ace610d78bf170b0e58c07ac42cadaf307ed66bbe60b03d8-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1faffaaf8e7125b9ace610d78bf170b0e58c07ac42cadaf307ed66bbe60b03d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1faffaaf8e7125b9ace610d78bf170b0e58c07ac42cadaf307ed66bbe60b03d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1faffaaf8e7125b9ace610d78bf170b0e58c07ac42cadaf307ed66bbe60b03d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "dockerenv-637175",
	                "Source": "/var/lib/docker/volumes/dockerenv-637175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-637175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-637175",
	                "name.minikube.sigs.k8s.io": "dockerenv-637175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "190e06dd112287a65f040283386b5ee6bdc939725277bd42995049479a319566",
	            "SandboxKey": "/var/run/docker/netns/190e06dd1122",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32773"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32774"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32777"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32775"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32776"
	                    }
	                ]
	            },
	            "Networks": {
	                "dockerenv-637175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3833ea54a4b9fdb012bcbbde24bb330e87add2259e3b1d3f694714df6e9996a0",
	                    "EndpointID": "148a8d49f133f06161f9f1ca094f9ad4b9688dd54b21a2adce8cdd57d28ad564",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "a2:dd:7c:95:52:8c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-637175",
	                        "fa2f82c3c1d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p dockerenv-637175 -n dockerenv-637175
helpers_test.go:252: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p dockerenv-637175 logs -n 25
helpers_test.go:260: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                       ARGS                                                        │     PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons     │ addons-982350 addons disable nvidia-device-plugin --alsologtostderr -v=1                                          │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:27 UTC │ 24 Nov 25 02:27 UTC │
	│ addons     │ addons-982350 addons disable metrics-server --alsologtostderr -v=1                                                │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:27 UTC │ 24 Nov 25 02:27 UTC │
	│ addons     │ addons-982350 addons disable headlamp --alsologtostderr -v=1                                                      │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:27 UTC │ 24 Nov 25 02:28 UTC │
	│ addons     │ addons-982350 addons disable yakd --alsologtostderr -v=1                                                          │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:27 UTC │ 24 Nov 25 02:28 UTC │
	│ ip         │ addons-982350 ip                                                                                                  │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:27 UTC │ 24 Nov 25 02:27 UTC │
	│ addons     │ addons-982350 addons disable registry --alsologtostderr -v=1                                                      │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:27 UTC │ 24 Nov 25 02:27 UTC │
	│ addons     │ addons-982350 addons disable cloud-spanner --alsologtostderr -v=1                                                 │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:28 UTC │
	│ ssh        │ addons-982350 ssh cat /opt/local-path-provisioner/pvc-e10810fd-af61-4198-96e5-3f409eec7e8a_default_test-pvc/file1 │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:28 UTC │
	│ addons     │ addons-982350 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                   │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:28 UTC │
	│ addons     │ addons-982350 addons disable inspektor-gadget --alsologtostderr -v=1                                              │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:28 UTC │
	│ ssh        │ addons-982350 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                          │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:28 UTC │
	│ ip         │ addons-982350 ip                                                                                                  │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:28 UTC │
	│ addons     │ addons-982350 addons disable ingress-dns --alsologtostderr -v=1                                                   │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:28 UTC │
	│ addons     │ addons-982350 addons disable ingress --alsologtostderr -v=1                                                       │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:28 UTC │
	│ addons     │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-982350                                    │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:28 UTC │
	│ addons     │ addons-982350 addons disable registry-creds --alsologtostderr -v=1                                                │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:28 UTC │
	│ addons     │ addons-982350 addons disable volumesnapshots --alsologtostderr -v=1                                               │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:28 UTC │
	│ addons     │ addons-982350 addons disable csi-hostpath-driver --alsologtostderr -v=1                                           │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:28 UTC │
	│ stop       │ -p addons-982350                                                                                                  │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:28 UTC │ 24 Nov 25 02:29 UTC │
	│ addons     │ enable dashboard -p addons-982350                                                                                 │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:29 UTC │ 24 Nov 25 02:29 UTC │
	│ addons     │ disable dashboard -p addons-982350                                                                                │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:29 UTC │ 24 Nov 25 02:29 UTC │
	│ addons     │ disable gvisor -p addons-982350                                                                                   │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:29 UTC │ 24 Nov 25 02:29 UTC │
	│ delete     │ -p addons-982350                                                                                                  │ addons-982350    │ jenkins │ v1.37.0 │ 24 Nov 25 02:29 UTC │ 24 Nov 25 02:29 UTC │
	│ start      │ -p dockerenv-637175 --driver=docker  --container-runtime=containerd                                               │ dockerenv-637175 │ jenkins │ v1.37.0 │ 24 Nov 25 02:29 UTC │ 24 Nov 25 02:29 UTC │
	│ docker-env │ --ssh-host --ssh-add -p dockerenv-637175                                                                          │ dockerenv-637175 │ jenkins │ v1.37.0 │ 24 Nov 25 02:29 UTC │ 24 Nov 25 02:29 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:29:11
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:29:11.048302   29929 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:29:11.048384   29929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:29:11.048386   29929 out.go:374] Setting ErrFile to fd 2...
	I1124 02:29:11.048389   29929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:29:11.048582   29929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 02:29:11.049062   29929 out.go:368] Setting JSON to false
	I1124 02:29:11.049873   29929 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":694,"bootTime":1763950657,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:29:11.049931   29929 start.go:143] virtualization: kvm guest
	I1124 02:29:11.052251   29929 out.go:179] * [dockerenv-637175] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:29:11.054093   29929 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:29:11.054096   29929 notify.go:221] Checking for updates...
	I1124 02:29:11.055582   29929 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:29:11.056912   29929 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 02:29:11.058430   29929 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 02:29:11.059731   29929 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:29:11.060888   29929 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:29:11.065473   29929 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:29:11.089174   29929 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:29:11.089273   29929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:29:11.146667   29929 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-24 02:29:11.136977188 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:29:11.146763   29929 docker.go:319] overlay module found
	I1124 02:29:11.148697   29929 out.go:179] * Using the docker driver based on user configuration
	I1124 02:29:11.149950   29929 start.go:309] selected driver: docker
	I1124 02:29:11.149959   29929 start.go:927] validating driver "docker" against <nil>
	I1124 02:29:11.149972   29929 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:29:11.150083   29929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:29:11.206944   29929 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-24 02:29:11.197183618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:29:11.207129   29929 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 02:29:11.207650   29929 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 02:29:11.207810   29929 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 02:29:11.209540   29929 out.go:179] * Using Docker driver with root privileges
	I1124 02:29:11.210830   29929 cni.go:84] Creating CNI manager for ""
	I1124 02:29:11.210899   29929 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 02:29:11.210905   29929 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 02:29:11.210964   29929 start.go:353] cluster config:
	{Name:dockerenv-637175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-637175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:29:11.212314   29929 out.go:179] * Starting "dockerenv-637175" primary control-plane node in "dockerenv-637175" cluster
	I1124 02:29:11.213683   29929 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 02:29:11.214867   29929 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 02:29:11.216155   29929 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 02:29:11.216181   29929 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1124 02:29:11.216194   29929 cache.go:65] Caching tarball of preloaded images
	I1124 02:29:11.216261   29929 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 02:29:11.216300   29929 preload.go:238] Found /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 02:29:11.216310   29929 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 02:29:11.216745   29929 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/config.json ...
	I1124 02:29:11.216772   29929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/config.json: {Name:mkd479a78362ede4e96c01e79feb6b8a6884d788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:29:11.236563   29929 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 02:29:11.236580   29929 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 02:29:11.236594   29929 cache.go:243] Successfully downloaded all kic artifacts
	I1124 02:29:11.236625   29929 start.go:360] acquireMachinesLock for dockerenv-637175: {Name:mkc8f619857d76a80b3ca8364d2725187ccc550d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 02:29:11.236714   29929 start.go:364] duration metric: took 77.156µs to acquireMachinesLock for "dockerenv-637175"
	I1124 02:29:11.236732   29929 start.go:93] Provisioning new machine with config: &{Name:dockerenv-637175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-637175 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 02:29:11.236824   29929 start.go:125] createHost starting for "" (driver="docker")
	I1124 02:29:11.238807   29929 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I1124 02:29:11.239026   29929 start.go:159] libmachine.API.Create for "dockerenv-637175" (driver="docker")
	I1124 02:29:11.239057   29929 client.go:173] LocalClient.Create starting
	I1124 02:29:11.239114   29929 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem
	I1124 02:29:11.239151   29929 main.go:143] libmachine: Decoding PEM data...
	I1124 02:29:11.239165   29929 main.go:143] libmachine: Parsing certificate...
	I1124 02:29:11.239225   29929 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem
	I1124 02:29:11.239239   29929 main.go:143] libmachine: Decoding PEM data...
	I1124 02:29:11.239246   29929 main.go:143] libmachine: Parsing certificate...
	I1124 02:29:11.239589   29929 cli_runner.go:164] Run: docker network inspect dockerenv-637175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 02:29:11.256722   29929 cli_runner.go:211] docker network inspect dockerenv-637175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 02:29:11.256809   29929 network_create.go:284] running [docker network inspect dockerenv-637175] to gather additional debugging logs...
	I1124 02:29:11.256827   29929 cli_runner.go:164] Run: docker network inspect dockerenv-637175
	W1124 02:29:11.273936   29929 cli_runner.go:211] docker network inspect dockerenv-637175 returned with exit code 1
	I1124 02:29:11.273955   29929 network_create.go:287] error running [docker network inspect dockerenv-637175]: docker network inspect dockerenv-637175: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-637175 not found
	I1124 02:29:11.273966   29929 network_create.go:289] output of [docker network inspect dockerenv-637175]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-637175 not found
	
	** /stderr **
	I1124 02:29:11.274076   29929 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 02:29:11.291771   29929 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001647e10}
	I1124 02:29:11.291818   29929 network_create.go:124] attempt to create docker network dockerenv-637175 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 02:29:11.291857   29929 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-637175 dockerenv-637175
	I1124 02:29:11.339741   29929 network_create.go:108] docker network dockerenv-637175 192.168.49.0/24 created
	I1124 02:29:11.339761   29929 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-637175" container
	I1124 02:29:11.339857   29929 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 02:29:11.356125   29929 cli_runner.go:164] Run: docker volume create dockerenv-637175 --label name.minikube.sigs.k8s.io=dockerenv-637175 --label created_by.minikube.sigs.k8s.io=true
	I1124 02:29:11.374080   29929 oci.go:103] Successfully created a docker volume dockerenv-637175
	I1124 02:29:11.374136   29929 cli_runner.go:164] Run: docker run --rm --name dockerenv-637175-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-637175 --entrypoint /usr/bin/test -v dockerenv-637175:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 02:29:11.770880   29929 oci.go:107] Successfully prepared a docker volume dockerenv-637175
	I1124 02:29:11.770935   29929 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 02:29:11.770942   29929 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 02:29:11.771006   29929 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-637175:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 02:29:16.063813   29929 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-637175:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.29273991s)
	I1124 02:29:16.063834   29929 kic.go:203] duration metric: took 4.292888923s to extract preloaded images to volume ...
	W1124 02:29:16.063934   29929 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 02:29:16.063959   29929 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 02:29:16.064008   29929 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 02:29:16.125467   29929 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-637175 --name dockerenv-637175 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-637175 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-637175 --network dockerenv-637175 --ip 192.168.49.2 --volume dockerenv-637175:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 02:29:16.413305   29929 cli_runner.go:164] Run: docker container inspect dockerenv-637175 --format={{.State.Running}}
	I1124 02:29:16.431701   29929 cli_runner.go:164] Run: docker container inspect dockerenv-637175 --format={{.State.Status}}
	I1124 02:29:16.450345   29929 cli_runner.go:164] Run: docker exec dockerenv-637175 stat /var/lib/dpkg/alternatives/iptables
	I1124 02:29:16.497818   29929 oci.go:144] the created container "dockerenv-637175" has a running status.
	I1124 02:29:16.497869   29929 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/dockerenv-637175/id_rsa...
	I1124 02:29:16.549253   29929 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-4883/.minikube/machines/dockerenv-637175/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 02:29:16.577516   29929 cli_runner.go:164] Run: docker container inspect dockerenv-637175 --format={{.State.Status}}
	I1124 02:29:16.595308   29929 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 02:29:16.595321   29929 kic_runner.go:114] Args: [docker exec --privileged dockerenv-637175 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 02:29:16.636508   29929 cli_runner.go:164] Run: docker container inspect dockerenv-637175 --format={{.State.Status}}
	I1124 02:29:16.658998   29929 machine.go:94] provisionDockerMachine start ...
	I1124 02:29:16.659071   29929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-637175
	I1124 02:29:16.679848   29929 main.go:143] libmachine: Using SSH client type: native
	I1124 02:29:16.680186   29929 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32773 <nil> <nil>}
	I1124 02:29:16.680199   29929 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 02:29:16.680856   29929 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47642->127.0.0.1:32773: read: connection reset by peer
	I1124 02:29:19.822112   29929 main.go:143] libmachine: SSH cmd err, output: <nil>: dockerenv-637175
	
	I1124 02:29:19.822134   29929 ubuntu.go:182] provisioning hostname "dockerenv-637175"
	I1124 02:29:19.822184   29929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-637175
	I1124 02:29:19.840196   29929 main.go:143] libmachine: Using SSH client type: native
	I1124 02:29:19.840409   29929 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32773 <nil> <nil>}
	I1124 02:29:19.840417   29929 main.go:143] libmachine: About to run SSH command:
	sudo hostname dockerenv-637175 && echo "dockerenv-637175" | sudo tee /etc/hostname
	I1124 02:29:19.988116   29929 main.go:143] libmachine: SSH cmd err, output: <nil>: dockerenv-637175
	
	I1124 02:29:19.988204   29929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-637175
	I1124 02:29:20.006377   29929 main.go:143] libmachine: Using SSH client type: native
	I1124 02:29:20.006604   29929 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32773 <nil> <nil>}
	I1124 02:29:20.006614   29929 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-637175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-637175/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-637175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 02:29:20.145387   29929 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 02:29:20.145402   29929 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-4883/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-4883/.minikube}
	I1124 02:29:20.145426   29929 ubuntu.go:190] setting up certificates
	I1124 02:29:20.145444   29929 provision.go:84] configureAuth start
	I1124 02:29:20.145504   29929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-637175
	I1124 02:29:20.162669   29929 provision.go:143] copyHostCerts
	I1124 02:29:20.162719   29929 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem, removing ...
	I1124 02:29:20.162725   29929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem
	I1124 02:29:20.162814   29929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem (1078 bytes)
	I1124 02:29:20.162897   29929 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem, removing ...
	I1124 02:29:20.162901   29929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem
	I1124 02:29:20.162926   29929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem (1123 bytes)
	I1124 02:29:20.162993   29929 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem, removing ...
	I1124 02:29:20.162996   29929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem
	I1124 02:29:20.163019   29929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem (1679 bytes)
	I1124 02:29:20.163066   29929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem org=jenkins.dockerenv-637175 san=[127.0.0.1 192.168.49.2 dockerenv-637175 localhost minikube]
	I1124 02:29:20.202950   29929 provision.go:177] copyRemoteCerts
	I1124 02:29:20.203003   29929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 02:29:20.203032   29929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-637175
	I1124 02:29:20.220466   29929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/dockerenv-637175/id_rsa Username:docker}
	I1124 02:29:20.319043   29929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 02:29:20.337842   29929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1124 02:29:20.354792   29929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 02:29:20.371737   29929 provision.go:87] duration metric: took 226.28145ms to configureAuth
	I1124 02:29:20.371753   29929 ubuntu.go:206] setting minikube options for container-runtime
	I1124 02:29:20.371950   29929 config.go:182] Loaded profile config "dockerenv-637175": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 02:29:20.371957   29929 machine.go:97] duration metric: took 3.712947202s to provisionDockerMachine
	I1124 02:29:20.371963   29929 client.go:176] duration metric: took 9.132902079s to LocalClient.Create
	I1124 02:29:20.371984   29929 start.go:167] duration metric: took 9.132958071s to libmachine.API.Create "dockerenv-637175"
	I1124 02:29:20.371991   29929 start.go:293] postStartSetup for "dockerenv-637175" (driver="docker")
	I1124 02:29:20.372000   29929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 02:29:20.372050   29929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 02:29:20.372084   29929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-637175
	I1124 02:29:20.389807   29929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/dockerenv-637175/id_rsa Username:docker}
	I1124 02:29:20.490700   29929 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 02:29:20.494300   29929 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 02:29:20.494313   29929 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 02:29:20.494322   29929 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/addons for local assets ...
	I1124 02:29:20.494362   29929 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/files for local assets ...
	I1124 02:29:20.494377   29929 start.go:296] duration metric: took 122.381914ms for postStartSetup
	I1124 02:29:20.494643   29929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-637175
	I1124 02:29:20.512087   29929 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/config.json ...
	I1124 02:29:20.512333   29929 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:29:20.512368   29929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-637175
	I1124 02:29:20.530091   29929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/dockerenv-637175/id_rsa Username:docker}
	I1124 02:29:20.626013   29929 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 02:29:20.630599   29929 start.go:128] duration metric: took 9.39375996s to createHost
	I1124 02:29:20.630616   29929 start.go:83] releasing machines lock for "dockerenv-637175", held for 9.393895688s
	I1124 02:29:20.630683   29929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-637175
	I1124 02:29:20.648089   29929 ssh_runner.go:195] Run: cat /version.json
	I1124 02:29:20.648132   29929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-637175
	I1124 02:29:20.648174   29929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 02:29:20.648233   29929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-637175
	I1124 02:29:20.666894   29929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/dockerenv-637175/id_rsa Username:docker}
	I1124 02:29:20.667189   29929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/dockerenv-637175/id_rsa Username:docker}
	I1124 02:29:20.821748   29929 ssh_runner.go:195] Run: systemctl --version
	I1124 02:29:20.828313   29929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 02:29:20.832771   29929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 02:29:20.832838   29929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 02:29:20.857958   29929 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 02:29:20.857971   29929 start.go:496] detecting cgroup driver to use...
	I1124 02:29:20.858001   29929 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 02:29:20.858050   29929 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 02:29:20.872025   29929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 02:29:20.884129   29929 docker.go:218] disabling cri-docker service (if available) ...
	I1124 02:29:20.884169   29929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 02:29:20.900364   29929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 02:29:20.917600   29929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 02:29:20.996238   29929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 02:29:21.083135   29929 docker.go:234] disabling docker service ...
	I1124 02:29:21.083187   29929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 02:29:21.101392   29929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 02:29:21.113819   29929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 02:29:21.195334   29929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 02:29:21.272504   29929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 02:29:21.284786   29929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 02:29:21.298520   29929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 02:29:21.308508   29929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 02:29:21.317093   29929 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 02:29:21.317209   29929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 02:29:21.325930   29929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 02:29:21.334630   29929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 02:29:21.343050   29929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 02:29:21.351817   29929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 02:29:21.359606   29929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 02:29:21.368169   29929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 02:29:21.376936   29929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 02:29:21.385504   29929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 02:29:21.392713   29929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 02:29:21.399882   29929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 02:29:21.478571   29929 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 02:29:21.577095   29929 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 02:29:21.577149   29929 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 02:29:21.581025   29929 start.go:564] Will wait 60s for crictl version
	I1124 02:29:21.581066   29929 ssh_runner.go:195] Run: which crictl
	I1124 02:29:21.584632   29929 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 02:29:21.609475   29929 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 02:29:21.609525   29929 ssh_runner.go:195] Run: containerd --version
	I1124 02:29:21.630587   29929 ssh_runner.go:195] Run: containerd --version
	I1124 02:29:21.652180   29929 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 02:29:21.653294   29929 cli_runner.go:164] Run: docker network inspect dockerenv-637175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 02:29:21.670184   29929 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 02:29:21.674231   29929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 02:29:21.684011   29929 kubeadm.go:884] updating cluster {Name:dockerenv-637175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-637175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 02:29:21.684104   29929 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 02:29:21.684141   29929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 02:29:21.708647   29929 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 02:29:21.708660   29929 containerd.go:534] Images already preloaded, skipping extraction
	I1124 02:29:21.708711   29929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 02:29:21.732938   29929 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 02:29:21.732963   29929 cache_images.go:86] Images are preloaded, skipping loading
	I1124 02:29:21.732970   29929 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 containerd true true} ...
	I1124 02:29:21.733060   29929 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-637175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:dockerenv-637175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 02:29:21.733105   29929 ssh_runner.go:195] Run: sudo crictl info
	I1124 02:29:21.757308   29929 cni.go:84] Creating CNI manager for ""
	I1124 02:29:21.757319   29929 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 02:29:21.757330   29929 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 02:29:21.757347   29929 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-637175 NodeName:dockerenv-637175 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 02:29:21.757454   29929 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-637175"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
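The kubeadm, kubelet and kube-proxy configuration rendered above is what gets written to the node as /var/tmp/minikube/kubeadm.yaml.new and later promoted to /var/tmp/minikube/kubeadm.yaml before "kubeadm init" runs (see the scp and cp steps in the following log lines). As a rough illustration only, assuming the dockerenv-637175 profile from this run is still up and the v1.34.1 binaries are in place, the same file could be re-checked by hand from the host:

	# open a shell inside the minikube node for this profile
	minikube ssh -p dockerenv-637175
	# ask kubeadm to re-parse and validate the generated configuration
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
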
	I1124 02:29:21.757504   29929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 02:29:21.765436   29929 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 02:29:21.765486   29929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 02:29:21.773233   29929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1124 02:29:21.785297   29929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 02:29:21.800662   29929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1124 02:29:21.813530   29929 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 02:29:21.817182   29929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 02:29:21.827087   29929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 02:29:21.903141   29929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 02:29:21.928038   29929 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175 for IP: 192.168.49.2
	I1124 02:29:21.928050   29929 certs.go:195] generating shared ca certs ...
	I1124 02:29:21.928068   29929 certs.go:227] acquiring lock for ca certs: {Name:mkd28e9f2e8e31fe23d0ba27851eb0df56d94420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:29:21.928259   29929 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key
	I1124 02:29:21.928319   29929 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key
	I1124 02:29:21.928327   29929 certs.go:257] generating profile certs ...
	I1124 02:29:21.928398   29929 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/client.key
	I1124 02:29:21.928409   29929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/client.crt with IP's: []
	I1124 02:29:21.973316   29929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/client.crt ...
	I1124 02:29:21.973330   29929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/client.crt: {Name:mk4ce08c886c8bdbab90ff46ec4e8473ac82bb5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:29:21.973487   29929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/client.key ...
	I1124 02:29:21.973493   29929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/client.key: {Name:mk02825949289580bb06705b85efc62dc9713456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:29:21.973563   29929 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/apiserver.key.17c03a98
	I1124 02:29:21.973574   29929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/apiserver.crt.17c03a98 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 02:29:22.080009   29929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/apiserver.crt.17c03a98 ...
	I1124 02:29:22.080025   29929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/apiserver.crt.17c03a98: {Name:mka9e2ea488b311a8c5cb28b9eb86f2afd1668d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:29:22.080193   29929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/apiserver.key.17c03a98 ...
	I1124 02:29:22.080208   29929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/apiserver.key.17c03a98: {Name:mk7ffb2bb34f79cddf53bd6fe7a68b7eb8015752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:29:22.080274   29929 certs.go:382] copying /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/apiserver.crt.17c03a98 -> /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/apiserver.crt
	I1124 02:29:22.080357   29929 certs.go:386] copying /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/apiserver.key.17c03a98 -> /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/apiserver.key
	I1124 02:29:22.080416   29929 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/proxy-client.key
	I1124 02:29:22.080427   29929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/proxy-client.crt with IP's: []
	I1124 02:29:22.177461   29929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/proxy-client.crt ...
	I1124 02:29:22.177489   29929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/proxy-client.crt: {Name:mk99c3970d5beee0caa84a974b42e4ac93810ce8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:29:22.177645   29929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/proxy-client.key ...
	I1124 02:29:22.177653   29929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/proxy-client.key: {Name:mkeeffd1b88d86f11badd45494ea7cbc00dde0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:29:22.177822   29929 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 02:29:22.177855   29929 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem (1078 bytes)
	I1124 02:29:22.177878   29929 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem (1123 bytes)
	I1124 02:29:22.177899   29929 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem (1679 bytes)
	I1124 02:29:22.178409   29929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 02:29:22.196728   29929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 02:29:22.214352   29929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 02:29:22.232513   29929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 02:29:22.250092   29929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 02:29:22.267706   29929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 02:29:22.285237   29929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 02:29:22.302198   29929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/dockerenv-637175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 02:29:22.319094   29929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 02:29:22.338353   29929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 02:29:22.350499   29929 ssh_runner.go:195] Run: openssl version
	I1124 02:29:22.356142   29929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 02:29:22.366797   29929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 02:29:22.370391   29929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1124 02:29:22.370429   29929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 02:29:22.403721   29929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 02:29:22.412227   29929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 02:29:22.415710   29929 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 02:29:22.415751   29929 kubeadm.go:401] StartCluster: {Name:dockerenv-637175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-637175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:29:22.415854   29929 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 02:29:22.415900   29929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:29:22.441287   29929 cri.go:89] found id: ""
	I1124 02:29:22.441348   29929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 02:29:22.449219   29929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 02:29:22.456759   29929 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 02:29:22.456831   29929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 02:29:22.464372   29929 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 02:29:22.464380   29929 kubeadm.go:158] found existing configuration files:
	
	I1124 02:29:22.464416   29929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 02:29:22.472008   29929 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 02:29:22.472046   29929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 02:29:22.478974   29929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 02:29:22.486161   29929 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 02:29:22.486201   29929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 02:29:22.493281   29929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 02:29:22.500477   29929 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 02:29:22.500516   29929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 02:29:22.507414   29929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 02:29:22.514502   29929 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 02:29:22.514552   29929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 02:29:22.521733   29929 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 02:29:22.587368   29929 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 02:29:22.645736   29929 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 02:29:34.056084   29929 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 02:29:34.056127   29929 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 02:29:34.056212   29929 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 02:29:34.056267   29929 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 02:29:34.056295   29929 kubeadm.go:319] OS: Linux
	I1124 02:29:34.056332   29929 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 02:29:34.056387   29929 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 02:29:34.056437   29929 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 02:29:34.056473   29929 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 02:29:34.056515   29929 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 02:29:34.056552   29929 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 02:29:34.056597   29929 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 02:29:34.056634   29929 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 02:29:34.056697   29929 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 02:29:34.056805   29929 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 02:29:34.056910   29929 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 02:29:34.056999   29929 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 02:29:34.058651   29929 out.go:252]   - Generating certificates and keys ...
	I1124 02:29:34.058711   29929 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 02:29:34.058773   29929 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 02:29:34.058844   29929 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 02:29:34.058888   29929 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 02:29:34.058937   29929 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 02:29:34.058976   29929 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 02:29:34.059037   29929 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 02:29:34.059146   29929 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [dockerenv-637175 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 02:29:34.059210   29929 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 02:29:34.059307   29929 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-637175 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 02:29:34.059364   29929 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 02:29:34.059414   29929 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 02:29:34.059450   29929 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 02:29:34.059495   29929 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 02:29:34.059535   29929 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 02:29:34.059579   29929 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 02:29:34.059621   29929 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 02:29:34.059673   29929 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 02:29:34.059716   29929 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 02:29:34.059823   29929 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 02:29:34.059922   29929 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 02:29:34.061267   29929 out.go:252]   - Booting up control plane ...
	I1124 02:29:34.061329   29929 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 02:29:34.061393   29929 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 02:29:34.061446   29929 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 02:29:34.061554   29929 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 02:29:34.061633   29929 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 02:29:34.061719   29929 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 02:29:34.061817   29929 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 02:29:34.061851   29929 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 02:29:34.061957   29929 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 02:29:34.062041   29929 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 02:29:34.062096   29929 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000829617s
	I1124 02:29:34.062183   29929 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 02:29:34.062248   29929 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 02:29:34.062328   29929 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 02:29:34.062396   29929 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 02:29:34.062460   29929 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.374104173s
	I1124 02:29:34.062514   29929 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.52781751s
	I1124 02:29:34.062573   29929 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501669991s
	I1124 02:29:34.062661   29929 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 02:29:34.062766   29929 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 02:29:34.062838   29929 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 02:29:34.063009   29929 kubeadm.go:319] [mark-control-plane] Marking the node dockerenv-637175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 02:29:34.063056   29929 kubeadm.go:319] [bootstrap-token] Using token: i9brsu.8rdbirdoa0tjo3s8
	I1124 02:29:34.064199   29929 out.go:252]   - Configuring RBAC rules ...
	I1124 02:29:34.064284   29929 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 02:29:34.064354   29929 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 02:29:34.064480   29929 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 02:29:34.064592   29929 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 02:29:34.064686   29929 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 02:29:34.064759   29929 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 02:29:34.064870   29929 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 02:29:34.064905   29929 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 02:29:34.064942   29929 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 02:29:34.064944   29929 kubeadm.go:319] 
	I1124 02:29:34.064998   29929 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 02:29:34.065000   29929 kubeadm.go:319] 
	I1124 02:29:34.065070   29929 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 02:29:34.065073   29929 kubeadm.go:319] 
	I1124 02:29:34.065093   29929 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 02:29:34.065153   29929 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 02:29:34.065195   29929 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 02:29:34.065198   29929 kubeadm.go:319] 
	I1124 02:29:34.065248   29929 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 02:29:34.065251   29929 kubeadm.go:319] 
	I1124 02:29:34.065293   29929 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 02:29:34.065296   29929 kubeadm.go:319] 
	I1124 02:29:34.065337   29929 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 02:29:34.065398   29929 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 02:29:34.065455   29929 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 02:29:34.065457   29929 kubeadm.go:319] 
	I1124 02:29:34.065526   29929 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 02:29:34.065587   29929 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 02:29:34.065602   29929 kubeadm.go:319] 
	I1124 02:29:34.065678   29929 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token i9brsu.8rdbirdoa0tjo3s8 \
	I1124 02:29:34.065763   29929 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5e943442c508de754e907135e9f68708045a0a18fa82619a148153bf802a361b \
	I1124 02:29:34.065802   29929 kubeadm.go:319] 	--control-plane 
	I1124 02:29:34.065806   29929 kubeadm.go:319] 
	I1124 02:29:34.065902   29929 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 02:29:34.065908   29929 kubeadm.go:319] 
	I1124 02:29:34.066029   29929 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token i9brsu.8rdbirdoa0tjo3s8 \
	I1124 02:29:34.066182   29929 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5e943442c508de754e907135e9f68708045a0a18fa82619a148153bf802a361b 
	I1124 02:29:34.066206   29929 cni.go:84] Creating CNI manager for ""
	I1124 02:29:34.066213   29929 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 02:29:34.067729   29929 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 02:29:34.068889   29929 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 02:29:34.073186   29929 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 02:29:34.073195   29929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 02:29:34.086520   29929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 02:29:34.288748   29929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 02:29:34.288823   29929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:29:34.288903   29929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-637175 minikube.k8s.io/updated_at=2025_11_24T02_29_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=dockerenv-637175 minikube.k8s.io/primary=true
	I1124 02:29:34.298716   29929 ops.go:34] apiserver oom_adj: -16
	I1124 02:29:34.359433   29929 kubeadm.go:1114] duration metric: took 70.681491ms to wait for elevateKubeSystemPrivileges
	I1124 02:29:34.371863   29929 kubeadm.go:403] duration metric: took 11.956109014s to StartCluster
	I1124 02:29:34.371894   29929 settings.go:142] acquiring lock: {Name:mk05d84efd831d60555ea716cd9d2a0a41871249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:29:34.371962   29929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 02:29:34.372604   29929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/kubeconfig: {Name:mkf99f016b653afd282cf36d34d1cc32c34d90de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:29:34.372831   29929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 02:29:34.372835   29929 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 02:29:34.372905   29929 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 02:29:34.373003   29929 addons.go:70] Setting storage-provisioner=true in profile "dockerenv-637175"
	I1124 02:29:34.373022   29929 addons.go:239] Setting addon storage-provisioner=true in "dockerenv-637175"
	I1124 02:29:34.373035   29929 config.go:182] Loaded profile config "dockerenv-637175": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 02:29:34.373051   29929 host.go:66] Checking if "dockerenv-637175" exists ...
	I1124 02:29:34.373051   29929 addons.go:70] Setting default-storageclass=true in profile "dockerenv-637175"
	I1124 02:29:34.373068   29929 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-637175"
	I1124 02:29:34.373460   29929 cli_runner.go:164] Run: docker container inspect dockerenv-637175 --format={{.State.Status}}
	I1124 02:29:34.373638   29929 cli_runner.go:164] Run: docker container inspect dockerenv-637175 --format={{.State.Status}}
	I1124 02:29:34.374925   29929 out.go:179] * Verifying Kubernetes components...
	I1124 02:29:34.376055   29929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 02:29:34.399526   29929 addons.go:239] Setting addon default-storageclass=true in "dockerenv-637175"
	I1124 02:29:34.399558   29929 host.go:66] Checking if "dockerenv-637175" exists ...
	I1124 02:29:34.400033   29929 cli_runner.go:164] Run: docker container inspect dockerenv-637175 --format={{.State.Status}}
	I1124 02:29:34.401795   29929 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 02:29:34.403302   29929 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 02:29:34.403313   29929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 02:29:34.403376   29929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-637175
	I1124 02:29:34.428986   29929 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 02:29:34.428998   29929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 02:29:34.429096   29929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-637175
	I1124 02:29:34.430253   29929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/dockerenv-637175/id_rsa Username:docker}
	I1124 02:29:34.453106   29929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/dockerenv-637175/id_rsa Username:docker}
	I1124 02:29:34.464947   29929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 02:29:34.509124   29929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 02:29:34.542769   29929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 02:29:34.560453   29929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 02:29:34.624963   29929 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1124 02:29:34.625813   29929 api_server.go:52] waiting for apiserver process to appear ...
	I1124 02:29:34.625861   29929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 02:29:34.815825   29929 api_server.go:72] duration metric: took 442.959694ms to wait for apiserver process to appear ...
	I1124 02:29:34.815838   29929 api_server.go:88] waiting for apiserver healthz status ...
	I1124 02:29:34.815853   29929 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 02:29:34.819799   29929 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 02:29:34.820575   29929 api_server.go:141] control plane version: v1.34.1
	I1124 02:29:34.820590   29929 api_server.go:131] duration metric: took 4.746678ms to wait for apiserver health ...
	I1124 02:29:34.820599   29929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 02:29:34.822481   29929 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 02:29:34.823021   29929 system_pods.go:59] 5 kube-system pods found
	I1124 02:29:34.823045   29929 system_pods.go:61] "etcd-dockerenv-637175" [387f0e1f-3454-4e7b-81b3-32f1d78ba29e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 02:29:34.823052   29929 system_pods.go:61] "kube-apiserver-dockerenv-637175" [f653f5d3-db3e-4b2e-a741-93e3b5c653ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 02:29:34.823059   29929 system_pods.go:61] "kube-controller-manager-dockerenv-637175" [3049db26-6944-4f33-a3fb-8c685dcfca0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 02:29:34.823063   29929 system_pods.go:61] "kube-scheduler-dockerenv-637175" [03d5b950-a089-43fc-bf37-f7e9744a8aa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 02:29:34.823065   29929 system_pods.go:61] "storage-provisioner" [67fa6609-c28d-4748-805a-f89fcd6ca0ef] Pending
	I1124 02:29:34.823071   29929 system_pods.go:74] duration metric: took 2.466808ms to wait for pod list to return data ...
	I1124 02:29:34.823078   29929 kubeadm.go:587] duration metric: took 450.217798ms to wait for: map[apiserver:true system_pods:true]
	I1124 02:29:34.823087   29929 node_conditions.go:102] verifying NodePressure condition ...
	I1124 02:29:34.823541   29929 addons.go:530] duration metric: took 450.645876ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 02:29:34.841025   29929 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 02:29:34.841040   29929 node_conditions.go:123] node cpu capacity is 8
	I1124 02:29:34.841055   29929 node_conditions.go:105] duration metric: took 17.965554ms to run NodePressure ...
	I1124 02:29:34.841066   29929 start.go:242] waiting for startup goroutines ...
	I1124 02:29:35.128833   29929 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-637175" context rescaled to 1 replicas
	I1124 02:29:35.128859   29929 start.go:247] waiting for cluster config update ...
	I1124 02:29:35.128881   29929 start.go:256] writing updated cluster config ...
	I1124 02:29:35.129154   29929 ssh_runner.go:195] Run: rm -f paused
	I1124 02:29:35.177416   29929 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 02:29:35.179602   29929 out.go:179] * Done! kubectl is now configured to use "dockerenv-637175" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	534ecf2bc35e8       409467f978b4a       11 seconds ago      Running             kindnet-cni               0                   89029c0e62355       kindnet-bvkb6                              kube-system
	e03a1aea5f99b       fc25172553d79       11 seconds ago      Running             kube-proxy                0                   b7cdca9bfadab       kube-proxy-l4bqr                           kube-system
	26e1f19d9085a       c80c8dbafe7dd       21 seconds ago      Running             kube-controller-manager   0                   28c771b8d9f4f       kube-controller-manager-dockerenv-637175   kube-system
	69e56949f7860       7dd6aaa1717ab       21 seconds ago      Running             kube-scheduler            0                   4cf1926d0ed49       kube-scheduler-dockerenv-637175            kube-system
	dff8eeaf0eb63       c3994bc696102       21 seconds ago      Running             kube-apiserver            0                   e5c17b1e08e9c       kube-apiserver-dockerenv-637175            kube-system
	7931bab2070e2       5f1f5298c888d       21 seconds ago      Running             etcd                      0                   9e408cf555e70       etcd-dockerenv-637175                      kube-system
	
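The container status table above is CRI output collected from the node during the post-mortem. As a sketch, assuming the dockerenv-637175 profile is still running, the same listing can be reproduced by hand:

	# open a shell on the node and list all CRI containers, including exited ones
	minikube ssh -p dockerenv-637175
	sudo crictl ps -a
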
	
	==> containerd <==
	Nov 24 02:29:37 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:37.613700286Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.025851959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-bvkb6,Uid:38dc2084-ce55-42e6-874c-95d25d66bdf5,Namespace:kube-system,Attempt:0,}"
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.041521684Z" level=info msg="connecting to shim 89029c0e6235565b5b88f5ecea516936d6db1b1f0be75fb3ebd791aef36c4a7b" address="unix:///run/containerd/s/ca3b25ff34395d77c424399f57bc7fdc463adcd7b24d6ecc8966b0168ec95d21" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.043830134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l4bqr,Uid:e5e682f4-e767-4532-8378-ed722b8c1d26,Namespace:kube-system,Attempt:0,}"
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.060044413Z" level=info msg="connecting to shim b7cdca9bfadab6bf8ee542eb6cae15f87bd1486a703a0a466dffcb7fc0c53fdc" address="unix:///run/containerd/s/90afebc53d6c9c37e6b79b6aa18a9548ead3b2dccbe51ff73a71c926651169a6" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.113171146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l4bqr,Uid:e5e682f4-e767-4532-8378-ed722b8c1d26,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7cdca9bfadab6bf8ee542eb6cae15f87bd1486a703a0a466dffcb7fc0c53fdc\""
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.118809738Z" level=info msg="CreateContainer within sandbox \"b7cdca9bfadab6bf8ee542eb6cae15f87bd1486a703a0a466dffcb7fc0c53fdc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.126162366Z" level=info msg="Container e03a1aea5f99b62409718f43f9bec98cc74fec578045f89b20f6d50d4e37c7c7: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.133903618Z" level=info msg="CreateContainer within sandbox \"b7cdca9bfadab6bf8ee542eb6cae15f87bd1486a703a0a466dffcb7fc0c53fdc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e03a1aea5f99b62409718f43f9bec98cc74fec578045f89b20f6d50d4e37c7c7\""
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.134588306Z" level=info msg="StartContainer for \"e03a1aea5f99b62409718f43f9bec98cc74fec578045f89b20f6d50d4e37c7c7\""
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.136136426Z" level=info msg="connecting to shim e03a1aea5f99b62409718f43f9bec98cc74fec578045f89b20f6d50d4e37c7c7" address="unix:///run/containerd/s/90afebc53d6c9c37e6b79b6aa18a9548ead3b2dccbe51ff73a71c926651169a6" protocol=ttrpc version=3
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.249503626Z" level=info msg="StartContainer for \"e03a1aea5f99b62409718f43f9bec98cc74fec578045f89b20f6d50d4e37c7c7\" returns successfully"
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.292523036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-bvkb6,Uid:38dc2084-ce55-42e6-874c-95d25d66bdf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"89029c0e6235565b5b88f5ecea516936d6db1b1f0be75fb3ebd791aef36c4a7b\""
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.297393277Z" level=info msg="CreateContainer within sandbox \"89029c0e6235565b5b88f5ecea516936d6db1b1f0be75fb3ebd791aef36c4a7b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.303193304Z" level=info msg="Container 534ecf2bc35e8b05caf19ef70ee26d652838fa463ae54cae6c6307b5f3c904df: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.309479358Z" level=info msg="CreateContainer within sandbox \"89029c0e6235565b5b88f5ecea516936d6db1b1f0be75fb3ebd791aef36c4a7b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"534ecf2bc35e8b05caf19ef70ee26d652838fa463ae54cae6c6307b5f3c904df\""
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.310010942Z" level=info msg="StartContainer for \"534ecf2bc35e8b05caf19ef70ee26d652838fa463ae54cae6c6307b5f3c904df\""
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.311073086Z" level=info msg="connecting to shim 534ecf2bc35e8b05caf19ef70ee26d652838fa463ae54cae6c6307b5f3c904df" address="unix:///run/containerd/s/ca3b25ff34395d77c424399f57bc7fdc463adcd7b24d6ecc8966b0168ec95d21" protocol=ttrpc version=3
	Nov 24 02:29:39 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:39.406661555Z" level=info msg="StartContainer for \"534ecf2bc35e8b05caf19ef70ee26d652838fa463ae54cae6c6307b5f3c904df\" returns successfully"
	Nov 24 02:29:49 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:49.883480194Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Nov 24 02:29:49 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:49.883587109Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Nov 24 02:29:50 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:50.317826722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:67fa6609-c28d-4748-805a-f89fcd6ca0ef,Namespace:kube-system,Attempt:0,}"
	Nov 24 02:29:50 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:50.320487821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6gf5f,Uid:6f152ae3-917b-4ea0-a95e-941584de5e5a,Namespace:kube-system,Attempt:0,}"
	Nov 24 02:29:50 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:50.339803630Z" level=info msg="connecting to shim a2c75e4a65d00642961af9581ebd48db2de51bc4c81fb5ffd7fc39b391a46969" address="unix:///run/containerd/s/13301776115cfa24eecad933e10194aa9dcd63ce554446a635ee40bdfaafc923" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 02:29:50 dockerenv-637175 containerd[661]: time="2025-11-24T02:29:50.363526094Z" level=info msg="connecting to shim 1eb9e3a8d9d03c0413c08aef5a2ee987701f40d55b9a9122a2f5f17d195f2b0b" address="unix:///run/containerd/s/9fb27ef30b56e84d0e3374fdb48784e97411afe5126011a8c286e9c1d55ecb29" namespace=k8s.io protocol=ttrpc version=3
	
	
	==> describe nodes <==
	Name:               dockerenv-637175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=dockerenv-637175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=dockerenv-637175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T02_29_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 02:29:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-637175
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 02:29:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 02:29:49 +0000   Mon, 24 Nov 2025 02:29:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 02:29:49 +0000   Mon, 24 Nov 2025 02:29:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 02:29:49 +0000   Mon, 24 Nov 2025 02:29:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 02:29:49 +0000   Mon, 24 Nov 2025 02:29:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-637175
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                c0c5b339-f395-45b7-84c8-83919483d743
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-6gf5f                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12s
	  kube-system                 etcd-dockerenv-637175                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17s
	  kube-system                 kindnet-bvkb6                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12s
	  kube-system                 kube-apiserver-dockerenv-637175             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-controller-manager-dockerenv-637175    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-proxy-l4bqr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kube-scheduler-dockerenv-637175             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11s   kube-proxy       
	  Normal  Starting                 17s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17s   kubelet          Node dockerenv-637175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s   kubelet          Node dockerenv-637175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s   kubelet          Node dockerenv-637175 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s   node-controller  Node dockerenv-637175 event: Registered Node dockerenv-637175 in Controller
	  Normal  NodeReady                1s    kubelet          Node dockerenv-637175 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [7931bab2070e2f0b4acbb7709f55d09bcb54b57d17be2354e70639d67f283d57] <==
	{"level":"warn","ts":"2025-11-24T02:29:30.049245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.057472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.064884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.071063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.077222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.087208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.095004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.101942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.109116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.116877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.123868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.130983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.138240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.144923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.151127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.157814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.163630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.169644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.175685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.184244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.195981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.199352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.206204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.212400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:29:30.265066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57530","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:29:50 up 12 min,  0 user,  load average: 0.74, 0.90, 0.47
	Linux dockerenv-637175 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [534ecf2bc35e8b05caf19ef70ee26d652838fa463ae54cae6c6307b5f3c904df] <==
	I1124 02:29:39.677716       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 02:29:39.678039       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 02:29:39.678202       1 main.go:148] setting mtu 1500 for CNI 
	I1124 02:29:39.678223       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 02:29:39.678249       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T02:29:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 02:29:39.880362       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 02:29:39.880921       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 02:29:39.880979       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 02:29:39.881190       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 02:29:40.181285       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 02:29:40.181309       1 metrics.go:72] Registering metrics
	I1124 02:29:40.181393       1 controller.go:711] "Syncing nftables rules"
	I1124 02:29:49.882883       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:29:49.882957       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dff8eeaf0eb6393df1f17b7f7f27c2fb17cb5ea3e8ac45a2a3d310494f2ef0d2] <==
	I1124 02:29:30.721674       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 02:29:30.721715       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 02:29:30.723310       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 02:29:30.727181       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 02:29:30.727308       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 02:29:30.734032       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 02:29:30.734300       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 02:29:30.914518       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 02:29:31.625640       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 02:29:31.630338       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 02:29:31.630358       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 02:29:32.103466       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 02:29:32.138704       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 02:29:32.229973       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 02:29:32.235818       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1124 02:29:32.236822       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 02:29:32.240623       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 02:29:32.647169       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 02:29:33.458487       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 02:29:33.469125       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 02:29:33.475633       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 02:29:38.301872       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 02:29:38.305681       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 02:29:38.648329       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 02:29:38.699501       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [26e1f19d9085ae4c21adeb1d05b0ff2b1a85df36d966771612465d5bd8c8f01b] <==
	I1124 02:29:37.608331       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="dockerenv-637175" podCIDRs=["10.244.0.0/24"]
	I1124 02:29:37.645011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 02:29:37.646221       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 02:29:37.646281       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 02:29:37.646296       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 02:29:37.646333       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 02:29:37.646384       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 02:29:37.646431       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 02:29:37.646437       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 02:29:37.646547       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 02:29:37.646624       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 02:29:37.646703       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 02:29:37.646706       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="dockerenv-637175"
	I1124 02:29:37.646742       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 02:29:37.646747       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 02:29:37.646808       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 02:29:37.646772       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 02:29:37.646954       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 02:29:37.648707       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 02:29:37.649852       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 02:29:37.652661       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 02:29:37.655067       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 02:29:37.658621       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 02:29:37.666477       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 02:29:37.668733       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e03a1aea5f99b62409718f43f9bec98cc74fec578045f89b20f6d50d4e37c7c7] <==
	I1124 02:29:39.280134       1 server_linux.go:53] "Using iptables proxy"
	I1124 02:29:39.347519       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 02:29:39.448441       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:29:39.448515       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:29:39.448655       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:29:39.471643       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:29:39.471698       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:29:39.476884       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:29:39.477351       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:29:39.477386       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:29:39.479486       1 config.go:200] "Starting service config controller"
	I1124 02:29:39.479516       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:29:39.479539       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:29:39.479545       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:29:39.479560       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:29:39.479566       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:29:39.480012       1 config.go:309] "Starting node config controller"
	I1124 02:29:39.480034       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:29:39.480043       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 02:29:39.580567       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 02:29:39.580602       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 02:29:39.580629       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [69e56949f786025b7bc51d1a1c6dcf75d92a9f45acc0b4a87fbe7805d154e4ae] <==
	E1124 02:29:30.670746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 02:29:30.670614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:29:30.670664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:29:30.670733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:29:30.670634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:29:30.670759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:29:30.670819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:29:30.670843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:29:30.670855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:29:30.670942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:29:30.671011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 02:29:30.671025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:29:30.671056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:29:30.671088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:29:31.535897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:29:31.600600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:29:31.601593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:29:31.634038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 02:29:31.659370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:29:31.670013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:29:31.725201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:29:31.740275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:29:31.777813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:29:32.001198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 02:29:35.168174       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 02:29:34 dockerenv-637175 kubelet[1441]: E1124 02:29:34.312319    1441 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-dockerenv-637175\" already exists" pod="kube-system/kube-apiserver-dockerenv-637175"
	Nov 24 02:29:34 dockerenv-637175 kubelet[1441]: E1124 02:29:34.312319    1441 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-dockerenv-637175\" already exists" pod="kube-system/kube-scheduler-dockerenv-637175"
	Nov 24 02:29:34 dockerenv-637175 kubelet[1441]: E1124 02:29:34.312660    1441 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-dockerenv-637175\" already exists" pod="kube-system/etcd-dockerenv-637175"
	Nov 24 02:29:34 dockerenv-637175 kubelet[1441]: E1124 02:29:34.313024    1441 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-dockerenv-637175\" already exists" pod="kube-system/kube-controller-manager-dockerenv-637175"
	Nov 24 02:29:34 dockerenv-637175 kubelet[1441]: I1124 02:29:34.336553    1441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-dockerenv-637175" podStartSLOduration=1.336529805 podStartE2EDuration="1.336529805s" podCreationTimestamp="2025-11-24 02:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 02:29:34.336379557 +0000 UTC m=+1.133095955" watchObservedRunningTime="2025-11-24 02:29:34.336529805 +0000 UTC m=+1.133246228"
	Nov 24 02:29:34 dockerenv-637175 kubelet[1441]: I1124 02:29:34.336730    1441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-dockerenv-637175" podStartSLOduration=1.336721521 podStartE2EDuration="1.336721521s" podCreationTimestamp="2025-11-24 02:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 02:29:34.328283609 +0000 UTC m=+1.125000015" watchObservedRunningTime="2025-11-24 02:29:34.336721521 +0000 UTC m=+1.133437900"
	Nov 24 02:29:34 dockerenv-637175 kubelet[1441]: I1124 02:29:34.351708    1441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-dockerenv-637175" podStartSLOduration=1.351688893 podStartE2EDuration="1.351688893s" podCreationTimestamp="2025-11-24 02:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 02:29:34.351360495 +0000 UTC m=+1.148076899" watchObservedRunningTime="2025-11-24 02:29:34.351688893 +0000 UTC m=+1.148405262"
	Nov 24 02:29:34 dockerenv-637175 kubelet[1441]: I1124 02:29:34.360233    1441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-dockerenv-637175" podStartSLOduration=1.3602142640000001 podStartE2EDuration="1.360214264s" podCreationTimestamp="2025-11-24 02:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 02:29:34.36017886 +0000 UTC m=+1.156895261" watchObservedRunningTime="2025-11-24 02:29:34.360214264 +0000 UTC m=+1.156930643"
	Nov 24 02:29:37 dockerenv-637175 kubelet[1441]: I1124 02:29:37.613236    1441 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 02:29:37 dockerenv-637175 kubelet[1441]: I1124 02:29:37.613958    1441 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 02:29:38 dockerenv-637175 kubelet[1441]: I1124 02:29:38.808454    1441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/38dc2084-ce55-42e6-874c-95d25d66bdf5-cni-cfg\") pod \"kindnet-bvkb6\" (UID: \"38dc2084-ce55-42e6-874c-95d25d66bdf5\") " pod="kube-system/kindnet-bvkb6"
	Nov 24 02:29:38 dockerenv-637175 kubelet[1441]: I1124 02:29:38.808509    1441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e5e682f4-e767-4532-8378-ed722b8c1d26-kube-proxy\") pod \"kube-proxy-l4bqr\" (UID: \"e5e682f4-e767-4532-8378-ed722b8c1d26\") " pod="kube-system/kube-proxy-l4bqr"
	Nov 24 02:29:38 dockerenv-637175 kubelet[1441]: I1124 02:29:38.808528    1441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38dc2084-ce55-42e6-874c-95d25d66bdf5-lib-modules\") pod \"kindnet-bvkb6\" (UID: \"38dc2084-ce55-42e6-874c-95d25d66bdf5\") " pod="kube-system/kindnet-bvkb6"
	Nov 24 02:29:38 dockerenv-637175 kubelet[1441]: I1124 02:29:38.808547    1441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jjsv\" (UniqueName: \"kubernetes.io/projected/38dc2084-ce55-42e6-874c-95d25d66bdf5-kube-api-access-2jjsv\") pod \"kindnet-bvkb6\" (UID: \"38dc2084-ce55-42e6-874c-95d25d66bdf5\") " pod="kube-system/kindnet-bvkb6"
	Nov 24 02:29:38 dockerenv-637175 kubelet[1441]: I1124 02:29:38.808595    1441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38dc2084-ce55-42e6-874c-95d25d66bdf5-xtables-lock\") pod \"kindnet-bvkb6\" (UID: \"38dc2084-ce55-42e6-874c-95d25d66bdf5\") " pod="kube-system/kindnet-bvkb6"
	Nov 24 02:29:38 dockerenv-637175 kubelet[1441]: I1124 02:29:38.808623    1441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mhbq\" (UniqueName: \"kubernetes.io/projected/e5e682f4-e767-4532-8378-ed722b8c1d26-kube-api-access-6mhbq\") pod \"kube-proxy-l4bqr\" (UID: \"e5e682f4-e767-4532-8378-ed722b8c1d26\") " pod="kube-system/kube-proxy-l4bqr"
	Nov 24 02:29:38 dockerenv-637175 kubelet[1441]: I1124 02:29:38.808664    1441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5e682f4-e767-4532-8378-ed722b8c1d26-xtables-lock\") pod \"kube-proxy-l4bqr\" (UID: \"e5e682f4-e767-4532-8378-ed722b8c1d26\") " pod="kube-system/kube-proxy-l4bqr"
	Nov 24 02:29:38 dockerenv-637175 kubelet[1441]: I1124 02:29:38.808706    1441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5e682f4-e767-4532-8378-ed722b8c1d26-lib-modules\") pod \"kube-proxy-l4bqr\" (UID: \"e5e682f4-e767-4532-8378-ed722b8c1d26\") " pod="kube-system/kube-proxy-l4bqr"
	Nov 24 02:29:40 dockerenv-637175 kubelet[1441]: I1124 02:29:40.330149    1441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bvkb6" podStartSLOduration=2.330119176 podStartE2EDuration="2.330119176s" podCreationTimestamp="2025-11-24 02:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 02:29:40.329975899 +0000 UTC m=+7.126692300" watchObservedRunningTime="2025-11-24 02:29:40.330119176 +0000 UTC m=+7.126835561"
	Nov 24 02:29:40 dockerenv-637175 kubelet[1441]: I1124 02:29:40.330286    1441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l4bqr" podStartSLOduration=2.330278298 podStartE2EDuration="2.330278298s" podCreationTimestamp="2025-11-24 02:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 02:29:39.331281387 +0000 UTC m=+6.127997792" watchObservedRunningTime="2025-11-24 02:29:40.330278298 +0000 UTC m=+7.126994682"
	Nov 24 02:29:49 dockerenv-637175 kubelet[1441]: I1124 02:29:49.975158    1441 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 02:29:50 dockerenv-637175 kubelet[1441]: I1124 02:29:50.092729    1441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/67fa6609-c28d-4748-805a-f89fcd6ca0ef-tmp\") pod \"storage-provisioner\" (UID: \"67fa6609-c28d-4748-805a-f89fcd6ca0ef\") " pod="kube-system/storage-provisioner"
	Nov 24 02:29:50 dockerenv-637175 kubelet[1441]: I1124 02:29:50.092800    1441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f152ae3-917b-4ea0-a95e-941584de5e5a-config-volume\") pod \"coredns-66bc5c9577-6gf5f\" (UID: \"6f152ae3-917b-4ea0-a95e-941584de5e5a\") " pod="kube-system/coredns-66bc5c9577-6gf5f"
	Nov 24 02:29:50 dockerenv-637175 kubelet[1441]: I1124 02:29:50.092896    1441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwhgq\" (UniqueName: \"kubernetes.io/projected/67fa6609-c28d-4748-805a-f89fcd6ca0ef-kube-api-access-gwhgq\") pod \"storage-provisioner\" (UID: \"67fa6609-c28d-4748-805a-f89fcd6ca0ef\") " pod="kube-system/storage-provisioner"
	Nov 24 02:29:50 dockerenv-637175 kubelet[1441]: I1124 02:29:50.092929    1441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7frd5\" (UniqueName: \"kubernetes.io/projected/6f152ae3-917b-4ea0-a95e-941584de5e5a-kube-api-access-7frd5\") pod \"coredns-66bc5c9577-6gf5f\" (UID: \"6f152ae3-917b-4ea0-a95e-941584de5e5a\") " pod="kube-system/coredns-66bc5c9577-6gf5f"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p dockerenv-637175 -n dockerenv-637175
helpers_test.go:269: (dbg) Run:  kubectl --context dockerenv-637175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6gf5f storage-provisioner
helpers_test.go:282: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context dockerenv-637175 describe pod coredns-66bc5c9577-6gf5f storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context dockerenv-637175 describe pod coredns-66bc5c9577-6gf5f storage-provisioner: exit status 1 (58.152974ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6gf5f" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context dockerenv-637175 describe pod coredns-66bc5c9577-6gf5f storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-637175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-637175
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-637175: (2.380158313s)
--- FAIL: TestDockerEnvContainerd (42.64s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-524458 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-524458 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-524458 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-524458 --alsologtostderr -v=1] stderr:
I1124 02:32:04.973493   50173 out.go:360] Setting OutFile to fd 1 ...
I1124 02:32:04.973761   50173 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:32:04.973785   50173 out.go:374] Setting ErrFile to fd 2...
I1124 02:32:04.973793   50173 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:32:04.974075   50173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
I1124 02:32:04.974409   50173 mustload.go:66] Loading cluster: functional-524458
I1124 02:32:04.974976   50173 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 02:32:04.975473   50173 cli_runner.go:164] Run: docker container inspect functional-524458 --format={{.State.Status}}
I1124 02:32:04.997831   50173 host.go:66] Checking if "functional-524458" exists ...
I1124 02:32:04.998171   50173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1124 02:32:05.078512   50173 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:05.062340694 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1124 02:32:05.078676   50173 api_server.go:166] Checking apiserver status ...
I1124 02:32:05.078727   50173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1124 02:32:05.078773   50173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-524458
I1124 02:32:05.107438   50173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/functional-524458/id_rsa Username:docker}
I1124 02:32:05.221703   50173 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4970/cgroup
W1124 02:32:05.230873   50173 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4970/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1124 02:32:05.230935   50173 ssh_runner.go:195] Run: ls
I1124 02:32:05.236265   50173 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1124 02:32:05.243242   50173 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1124 02:32:05.243297   50173 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1124 02:32:05.243432   50173 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 02:32:05.243441   50173 addons.go:70] Setting dashboard=true in profile "functional-524458"
I1124 02:32:05.243450   50173 addons.go:239] Setting addon dashboard=true in "functional-524458"
I1124 02:32:05.243470   50173 host.go:66] Checking if "functional-524458" exists ...
I1124 02:32:05.243833   50173 cli_runner.go:164] Run: docker container inspect functional-524458 --format={{.State.Status}}
I1124 02:32:05.266586   50173 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1124 02:32:05.268125   50173 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1124 02:32:05.269391   50173 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1124 02:32:05.269411   50173 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1124 02:32:05.269472   50173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-524458
I1124 02:32:05.291288   50173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/functional-524458/id_rsa Username:docker}
I1124 02:32:05.399885   50173 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1124 02:32:05.399909   50173 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1124 02:32:05.413428   50173 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1124 02:32:05.413457   50173 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1124 02:32:05.426884   50173 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1124 02:32:05.426908   50173 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1124 02:32:05.440760   50173 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1124 02:32:05.440796   50173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1124 02:32:05.455963   50173 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1124 02:32:05.455989   50173 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1124 02:32:05.470321   50173 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1124 02:32:05.470346   50173 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1124 02:32:05.484538   50173 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1124 02:32:05.484560   50173 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1124 02:32:05.498189   50173 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1124 02:32:05.498215   50173 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1124 02:32:05.511545   50173 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1124 02:32:05.511570   50173 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1124 02:32:05.527642   50173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
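For context, the addon step above copies each dashboard manifest onto the node and then applies them all in a single kubectl invocation. Below is a minimal Go sketch of that one-shot multi-file apply pattern (illustrative only, not minikube's addons code; the plain "kubectl" path and the shortened manifest list are assumptions):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddonManifests runs one `kubectl apply` over several -f files,
    // mirroring the single apply call seen in the log above.
    func applyAddonManifests(kubectl string, manifests []string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	out, err := exec.Command(kubectl, args...).CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	// Paths as written to the node in the log; "kubectl" on PATH is an assumption.
    	_ = applyAddonManifests("kubectl", []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-dp.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    	})
    }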
I1124 02:32:06.089582   50173 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-524458 addons enable metrics-server

I1124 02:32:06.093181   50173 addons.go:202] Writing out "functional-524458" config to set dashboard=true...
W1124 02:32:06.093492   50173 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1124 02:32:06.094217   50173 kapi.go:59] client config for functional-524458: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt", KeyFile:"/home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.key", CAFile:"/home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1124 02:32:06.094724   50173 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1124 02:32:06.094751   50173 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1124 02:32:06.094759   50173 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1124 02:32:06.094771   50173 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1124 02:32:06.094794   50173 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1124 02:32:06.103286   50173 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  94785469-8667-432f-b12d-4340c2eb5100 644 0 2025-11-24 02:32:06 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-11-24 02:32:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.98.50.142,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.50.142],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1124 02:32:06.103432   50173 out.go:285] * Launching proxy ...
* Launching proxy ...
I1124 02:32:06.103518   50173 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-524458 proxy --port 36195]
I1124 02:32:06.103796   50173 dashboard.go:159] Waiting for kubectl to output host:port ...
I1124 02:32:06.153717   50173 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1124 02:32:06.153797   50173 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1124 02:32:06.163678   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[868af540-849d-4005-837c-66a1008ecea2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc0007287c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00024f040 TLS:<nil>}
I1124 02:32:06.163792   50173 retry.go:31] will retry after 64.942µs: Temporary Error: unexpected response code: 503
I1124 02:32:06.167920   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[30dc5a1a-8646-4c24-a2c3-1ab86284d855] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000a9ba80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2500 TLS:<nil>}
I1124 02:32:06.168078   50173 retry.go:31] will retry after 95.123µs: Temporary Error: unexpected response code: 503
I1124 02:32:06.172264   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9700d70b-2795-48e8-8be7-c6fbc8aaebb5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000797680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2640 TLS:<nil>}
I1124 02:32:06.172319   50173 retry.go:31] will retry after 334.577µs: Temporary Error: unexpected response code: 503
I1124 02:32:06.176228   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[36ec82ce-229f-478a-b28f-10783838a09c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000797740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00024f180 TLS:<nil>}
I1124 02:32:06.176286   50173 retry.go:31] will retry after 217.203µs: Temporary Error: unexpected response code: 503
I1124 02:32:06.180093   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c599dbc1-1987-463c-90f7-71b6011c2497] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000a9bc40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00024f2c0 TLS:<nil>}
I1124 02:32:06.180150   50173 retry.go:31] will retry after 422.991µs: Temporary Error: unexpected response code: 503
I1124 02:32:06.183936   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2a9aa1ff-9588-42ab-a358-76ac3a4addc3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000888080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d28c0 TLS:<nil>}
I1124 02:32:06.183978   50173 retry.go:31] will retry after 1.05033ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.187603   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[717deef1-2f19-4877-a89b-32688b8cbfa7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc0007288c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2a00 TLS:<nil>}
I1124 02:32:06.187659   50173 retry.go:31] will retry after 1.4974ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.192349   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d678c16f-ad68-409e-8d9c-b5729c5fd09d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000888140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00030e3c0 TLS:<nil>}
I1124 02:32:06.192401   50173 retry.go:31] will retry after 1.747719ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.197347   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7280c98c-8f4b-419e-9b49-3641ee23bd33] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000728a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2b40 TLS:<nil>}
I1124 02:32:06.197414   50173 retry.go:31] will retry after 2.035541ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.202147   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[776bf7ce-9141-467c-9c22-67d3b778eed7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000797880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00030e500 TLS:<nil>}
I1124 02:32:06.202198   50173 retry.go:31] will retry after 2.381609ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.207932   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2345410c-26f4-4593-82e5-8fcddeba035f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000728b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00024f400 TLS:<nil>}
I1124 02:32:06.207998   50173 retry.go:31] will retry after 2.966927ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.213639   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d917affd-f7b0-4035-84af-f4cb5bc6edd3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000797980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00030e780 TLS:<nil>}
I1124 02:32:06.213698   50173 retry.go:31] will retry after 9.369971ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.226987   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[93ed2d71-de4e-4be6-8374-85784c57c0e0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000888240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00024f540 TLS:<nil>}
I1124 02:32:06.227059   50173 retry.go:31] will retry after 13.924755ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.245222   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2535c6f1-cbaf-4015-8c4e-d9b40766e570] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000728c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2c80 TLS:<nil>}
I1124 02:32:06.245294   50173 retry.go:31] will retry after 25.405709ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.274866   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b18d335-61f5-4cd4-b5e0-3af79eeb4dcd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000728cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00030e8c0 TLS:<nil>}
I1124 02:32:06.274919   50173 retry.go:31] will retry after 20.664176ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.301275   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2516599f-1780-4a4c-925e-b87266b1d35b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000888380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00030ea00 TLS:<nil>}
I1124 02:32:06.301376   50173 retry.go:31] will retry after 41.009332ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.345887   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[87d9fa73-8cc8-4a65-9e89-6c8a6db22109] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000797a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2dc0 TLS:<nil>}
I1124 02:32:06.345958   50173 retry.go:31] will retry after 34.649026ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.385524   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ac22139-9a9d-4ffc-8071-98d38d8fe4dc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000728dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00024f680 TLS:<nil>}
I1124 02:32:06.385603   50173 retry.go:31] will retry after 128.693907ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.517873   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6e8b8f05-d088-4adc-a344-eda4d2fd9057] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000797b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00030eb40 TLS:<nil>}
I1124 02:32:06.517940   50173 retry.go:31] will retry after 135.820866ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.657085   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[deb549b5-e9a4-4cb0-abc6-a73dd30a07d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc0008884c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00024f7c0 TLS:<nil>}
I1124 02:32:06.657235   50173 retry.go:31] will retry after 257.335203ms: Temporary Error: unexpected response code: 503
I1124 02:32:06.918439   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f549701f-05b0-4a86-816f-9f4b39d4cfa1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:06 GMT]] Body:0xc000797c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2f00 TLS:<nil>}
I1124 02:32:06.918493   50173 retry.go:31] will retry after 419.542706ms: Temporary Error: unexpected response code: 503
I1124 02:32:07.342415   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[32e1edb9-bcd3-401a-8572-d04b78e214d5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:07 GMT]] Body:0xc0008885c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00024f900 TLS:<nil>}
I1124 02:32:07.342507   50173 retry.go:31] will retry after 508.079439ms: Temporary Error: unexpected response code: 503
I1124 02:32:07.854974   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[96e8d47d-790e-4c45-8009-284dcb299e98] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:07 GMT]] Body:0xc000888680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d3040 TLS:<nil>}
I1124 02:32:07.855035   50173 retry.go:31] will retry after 638.777327ms: Temporary Error: unexpected response code: 503
I1124 02:32:08.497127   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[46e05fad-ec20-440b-81b4-a3a7996c2e36] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:08 GMT]] Body:0xc000797e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d3180 TLS:<nil>}
I1124 02:32:08.497194   50173 retry.go:31] will retry after 1.560001092s: Temporary Error: unexpected response code: 503
I1124 02:32:10.060636   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e5beb119-c6ad-4ad8-b859-ee3b2365d4b1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:10 GMT]] Body:0xc000888740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00030ec80 TLS:<nil>}
I1124 02:32:10.060702   50173 retry.go:31] will retry after 1.956255673s: Temporary Error: unexpected response code: 503
I1124 02:32:12.021039   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ca86a9a4-81d7-42ff-927d-30e6c41f2cc5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:12 GMT]] Body:0xc0008887c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d3680 TLS:<nil>}
I1124 02:32:12.021109   50173 retry.go:31] will retry after 2.038300997s: Temporary Error: unexpected response code: 503
I1124 02:32:14.062588   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[80b11cc6-52d8-452f-ac31-a7f32f78f630] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:14 GMT]] Body:0xc000797f00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d3a40 TLS:<nil>}
I1124 02:32:14.062642   50173 retry.go:31] will retry after 3.790478316s: Temporary Error: unexpected response code: 503
I1124 02:32:17.857538   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9489a8c0-2e0e-4f89-93eb-d03e4673fffc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:17 GMT]] Body:0xc0008e6540 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00030edc0 TLS:<nil>}
I1124 02:32:17.857603   50173 retry.go:31] will retry after 3.858114153s: Temporary Error: unexpected response code: 503
I1124 02:32:21.718740   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0e067aff-3bfd-4564-9880-a9723659c2b6] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:21 GMT]] Body:0xc000728f80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00024fa40 TLS:<nil>}
I1124 02:32:21.718843   50173 retry.go:31] will retry after 7.847368878s: Temporary Error: unexpected response code: 503
I1124 02:32:29.571813   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[221dd3b5-58d4-415f-8984-7f6b0bf38060] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:29 GMT]] Body:0xc0008e6600 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d3e00 TLS:<nil>}
I1124 02:32:29.571875   50173 retry.go:31] will retry after 13.524409142s: Temporary Error: unexpected response code: 503
I1124 02:32:43.101435   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f9a49bd8-ac79-4f22-8ced-f76484bcb341] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:32:43 GMT]] Body:0xc000888980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00024fb80 TLS:<nil>}
I1124 02:32:43.101499   50173 retry.go:31] will retry after 24.299306702s: Temporary Error: unexpected response code: 503
I1124 02:33:07.404492   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[471a8e37-c333-4509-8e77-d9b6b072fff9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:33:07 GMT]] Body:0xc000888a00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00030ef00 TLS:<nil>}
I1124 02:33:07.404549   50173 retry.go:31] will retry after 26.114829066s: Temporary Error: unexpected response code: 503
I1124 02:33:33.525571   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[239f976a-d1bf-4f37-a29e-eaaa958791f0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:33:33 GMT]] Body:0xc0008e6a00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00041c000 TLS:<nil>}
I1124 02:33:33.525640   50173 retry.go:31] will retry after 29.115254532s: Temporary Error: unexpected response code: 503
I1124 02:34:02.646292   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[13a1a0b5-5dd2-4c3a-96a2-f78b7dcf71f6] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:34:02 GMT]] Body:0xc000888b00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00024fcc0 TLS:<nil>}
I1124 02:34:02.646348   50173 retry.go:31] will retry after 1m5.191142053s: Temporary Error: unexpected response code: 503
I1124 02:35:07.841398   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b6ee1785-40a5-43d0-8542-de54cc3b1515] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:35:07 GMT]] Body:0xc0008e6580 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00041c140 TLS:<nil>}
I1124 02:35:07.841465   50173 retry.go:31] will retry after 48.076470167s: Temporary Error: unexpected response code: 503
I1124 02:35:55.921426   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9032aa4c-5b38-4f1f-97c9-8d8c82409ae0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:35:55 GMT]] Body:0xc0008880c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00024e3c0 TLS:<nil>}
I1124 02:35:55.921509   50173 retry.go:31] will retry after 53.221684632s: Temporary Error: unexpected response code: 503
I1124 02:36:49.148169   50173 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fc897a99-a030-4a83-b9bd-c64cc092f1aa] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 24 Nov 2025 02:36:49 GMT]] Body:0xc000888080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00041c500 TLS:<nil>}
I1124 02:36:49.148262   50173 retry.go:31] will retry after 49.611619654s: Temporary Error: unexpected response code: 503
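The repeated 503 responses above are the proxy health check polling the kubectl-proxy URL until the dashboard pod starts answering, with the wait between attempts growing roughly exponentially before the test eventually gives up. A minimal Go sketch of that poll-with-backoff pattern follows (illustrative only, not minikube's retry.go; the starting delay, doubling factor, and overall timeout are assumptions):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForDashboard polls url until it returns 200, doubling the wait
    // between attempts, or gives up after maxWait.
    func waitForDashboard(url string, maxWait time.Duration) error {
    	delay := 100 * time.Microsecond // the log starts with sub-millisecond retries
    	deadline := time.Now().Add(maxWait)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // dashboard answered; health check passes
    			}
    		}
    		time.Sleep(delay)
    		delay *= 2 // exponential backoff, as the growing intervals in the log suggest
    	}
    	return fmt.Errorf("dashboard did not become healthy within %s", maxWait)
    }

    func main() {
    	// Proxy address from this run (127.0.0.1:36195); assumed reachable.
    	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
    	if err := waitForDashboard(url, 5*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }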
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-524458
helpers_test.go:243: (dbg) docker inspect functional-524458:

-- stdout --
	[
	    {
	        "Id": "8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34",
	        "Created": "2025-11-24T02:30:28.925146241Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40439,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T02:30:28.96111684Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/hosts",
	        "LogPath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34-json.log",
	        "Name": "/functional-524458",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-524458:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-524458",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34",
	                "LowerDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-524458",
	                "Source": "/var/lib/docker/volumes/functional-524458/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-524458",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-524458",
	                "name.minikube.sigs.k8s.io": "functional-524458",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b7e0113de159bb7f20d80f3f8f3ea57d04b5854af723c36f353c1401899bee04",
	            "SandboxKey": "/var/run/docker/netns/b7e0113de159",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-524458": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7ce998af76f9bf37a9b0b37e8dc03d8566ef5a726be1278dc8886354dffa2129",
	                    "EndpointID": "e57019316db23c37637b8f4e72b83f56be989c49058967b2c1d7a721d73ffb4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "72:bd:14:22:6d:10",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-524458",
	                        "8f46810d4481"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
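The NetworkSettings.Ports section of this inspect output is where the harness reads the SSH host port (32783 for 22/tcp in this run), via the Go template shown in the earlier cli_runner line. A minimal Go sketch that reads the same field from `docker inspect` JSON (illustrative only; the struct shape and error handling are assumptions):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // inspectEntry is a minimal view of the inspect JSON: only the port bindings we need.
    type inspectEntry struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string `json:"HostIp"`
    			HostPort string `json:"HostPort"`
    		} `json:"Ports"`
    	} `json:"NetworkSettings"`
    }

    // sshHostPort returns the host port mapped to the container's 22/tcp,
    // the same value the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} yields.
    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "inspect", container).Output()
    	if err != nil {
    		return "", err
    	}
    	var entries []inspectEntry
    	if err := json.Unmarshal(out, &entries); err != nil {
    		return "", err
    	}
    	if len(entries) == 0 {
    		return "", fmt.Errorf("no inspect data for %s", container)
    	}
    	bindings := entries[0].NetworkSettings.Ports["22/tcp"]
    	if len(bindings) == 0 {
    		return "", fmt.Errorf("no 22/tcp binding for %s", container)
    	}
    	return bindings[0].HostPort, nil // "32783" in this run
    }

    func main() {
    	port, err := sshHostPort("functional-524458")
    	fmt.Println(port, err)
    }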
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-524458 -n functional-524458
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-524458 logs -n 25: (1.20487593s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-524458 ssh -- ls -la /mount-9p                                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh     │ functional-524458 ssh cat /mount-9p/test-1763951524960886758                                                                      │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ addons  │ functional-524458 addons list                                                                                                     │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ addons  │ functional-524458 addons list -o json                                                                                             │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh     │ functional-524458 ssh echo hello                                                                                                  │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh     │ functional-524458 ssh cat /etc/hostname                                                                                           │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ tunnel  │ functional-524458 tunnel --alsologtostderr                                                                                        │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ tunnel  │ functional-524458 tunnel --alsologtostderr                                                                                        │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ tunnel  │ functional-524458 tunnel --alsologtostderr                                                                                        │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ ssh     │ functional-524458 ssh stat /mount-9p/created-by-test                                                                              │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh     │ functional-524458 ssh stat /mount-9p/created-by-pod                                                                               │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh     │ functional-524458 ssh sudo umount -f /mount-9p                                                                                    │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh     │ functional-524458 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ mount   │ -p functional-524458 /tmp/TestFunctionalparallelMountCmdspecific-port1198358865/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ ssh     │ functional-524458 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh     │ functional-524458 ssh -- ls -la /mount-9p                                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh     │ functional-524458 ssh sudo umount -f /mount-9p                                                                                    │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ mount   │ -p functional-524458 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1264968832/001:/mount2 --alsologtostderr -v=1                │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ mount   │ -p functional-524458 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1264968832/001:/mount3 --alsologtostderr -v=1                │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ ssh     │ functional-524458 ssh findmnt -T /mount1                                                                                          │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ mount   │ -p functional-524458 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1264968832/001:/mount1 --alsologtostderr -v=1                │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ ssh     │ functional-524458 ssh findmnt -T /mount1                                                                                          │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh     │ functional-524458 ssh findmnt -T /mount2                                                                                          │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh     │ functional-524458 ssh findmnt -T /mount3                                                                                          │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ mount   │ -p functional-524458 --kill=true                                                                                                  │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:32:04
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:32:04.712497   49906 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:32:04.712948   49906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:04.712960   49906 out.go:374] Setting ErrFile to fd 2...
	I1124 02:32:04.712966   49906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:04.713312   49906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 02:32:04.713858   49906 out.go:368] Setting JSON to false
	I1124 02:32:04.715081   49906 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":868,"bootTime":1763950657,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:32:04.715153   49906 start.go:143] virtualization: kvm guest
	I1124 02:32:04.716957   49906 out.go:179] * [functional-524458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:32:04.718285   49906 notify.go:221] Checking for updates...
	I1124 02:32:04.718332   49906 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:32:04.719589   49906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:32:04.720934   49906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 02:32:04.722032   49906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 02:32:04.723392   49906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:32:04.724722   49906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:32:04.726193   49906 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 02:32:04.726692   49906 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:32:04.751591   49906 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:32:04.751738   49906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:04.812419   49906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:04.802268406 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:04.812559   49906 docker.go:319] overlay module found
	I1124 02:32:04.814655   49906 out.go:179] * Using the docker driver based on existing profile
	I1124 02:32:04.815752   49906 start.go:309] selected driver: docker
	I1124 02:32:04.815794   49906 start.go:927] validating driver "docker" against &{Name:functional-524458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-524458 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:04.815939   49906 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:32:04.816055   49906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:04.889898   49906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:04.876051797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:04.890729   49906 cni.go:84] Creating CNI manager for ""
	I1124 02:32:04.890846   49906 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 02:32:04.890914   49906 start.go:353] cluster config:
	{Name:functional-524458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-524458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:04.893578   49906 out.go:179] * dry-run validation complete!
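The driver and cluster-config validation above is minikube's own start-time check; assuming the profile and flags shown in this run, the same check can usually be repeated without mutating the cluster by using minikube's dry-run mode (the exact invocation below is a sketch, not part of the test):

    # validates driver and cluster config only; does not change state
    out/minikube-linux-amd64 start -p functional-524458 --dry-run --driver=docker --container-runtime=containerd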
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f4209b6719c49       56cc512116c8f       4 minutes ago       Exited              mount-munger              0                   56e203a84c75b       busybox-mount                               default
	2d6aad34e22f6       6e38f40d628db       5 minutes ago       Running             storage-provisioner       2                   04364561b4944       storage-provisioner                         kube-system
	ea021386d9aa8       c3994bc696102       5 minutes ago       Running             kube-apiserver            0                   f549fcd0e6bde       kube-apiserver-functional-524458            kube-system
	2402aa0a440d9       5f1f5298c888d       5 minutes ago       Running             etcd                      1                   755542b390469       etcd-functional-524458                      kube-system
	3c37f8c32c41e       c80c8dbafe7dd       5 minutes ago       Running             kube-controller-manager   2                   32624cec026c9       kube-controller-manager-functional-524458   kube-system
	727f77f614fb0       52546a367cc9e       5 minutes ago       Running             coredns                   1                   9655c54274a11       coredns-66bc5c9577-vm5lj                    kube-system
	35bec471e4f9f       fc25172553d79       5 minutes ago       Running             kube-proxy                1                   e45eacf9b156d       kube-proxy-fpnq6                            kube-system
	69ec8eb7d8059       409467f978b4a       5 minutes ago       Running             kindnet-cni               1                   9a6c08ca602bb       kindnet-z2hwm                               kube-system
	cabebfa1d5c87       c80c8dbafe7dd       5 minutes ago       Exited              kube-controller-manager   1                   32624cec026c9       kube-controller-manager-functional-524458   kube-system
	33d9520aecf65       7dd6aaa1717ab       5 minutes ago       Running             kube-scheduler            1                   0f09143310ce8       kube-scheduler-functional-524458            kube-system
	1933b021444ba       6e38f40d628db       5 minutes ago       Exited              storage-provisioner       1                   04364561b4944       storage-provisioner                         kube-system
	ff1f2401e0888       52546a367cc9e       6 minutes ago       Exited              coredns                   0                   9655c54274a11       coredns-66bc5c9577-vm5lj                    kube-system
	a4e33a61af8cc       409467f978b4a       6 minutes ago       Exited              kindnet-cni               0                   9a6c08ca602bb       kindnet-z2hwm                               kube-system
	bcaa0dfec6478       fc25172553d79       6 minutes ago       Exited              kube-proxy                0                   e45eacf9b156d       kube-proxy-fpnq6                            kube-system
	011ce34e2a265       7dd6aaa1717ab       6 minutes ago       Exited              kube-scheduler            0                   0f09143310ce8       kube-scheduler-functional-524458            kube-system
	9d4e9836cae55       5f1f5298c888d       6 minutes ago       Exited              etcd                      0                   755542b390469       etcd-functional-524458                      kube-system
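With containerd as the runtime, a listing equivalent to the table above can normally be produced from inside the node with crictl (the ssh invocation below is a sketch using the profile name from this report):

    # list all CRI containers for this profile, including exited ones
    out/minikube-linux-amd64 ssh -p functional-524458 -- sudo crictl ps -a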
	
	
	==> containerd <==
	Nov 24 02:36:26 functional-524458 containerd[3791]: time="2025-11-24T02:36:26.416168295Z" level=info msg="container event discarded" container=cabebfa1d5c877bf3cd69d20bc91b4623549e427424bf2698cfa5885285e48a4 type=CONTAINER_STARTED_EVENT
	Nov 24 02:36:26 functional-524458 containerd[3791]: time="2025-11-24T02:36:26.427400488Z" level=info msg="container event discarded" container=1933b021444ba331525fa058f0fd57fefe0a1dc2f1ad2bfe07daf3d4de6d2b40 type=CONTAINER_STOPPED_EVENT
	Nov 24 02:36:26 functional-524458 containerd[3791]: time="2025-11-24T02:36:26.449507695Z" level=info msg="container event discarded" container=35bec471e4f9f67e63e215b9a948fc13ea1c482c87ba3da7c89537d2b954fc21 type=CONTAINER_STARTED_EVENT
	Nov 24 02:36:27 functional-524458 containerd[3791]: time="2025-11-24T02:36:27.314178568Z" level=info msg="container event discarded" container=9165629e83b6d12f483e6423ef22d41061407831464c7d88c07381e7ef55f51e type=CONTAINER_DELETED_EVENT
	Nov 24 02:36:35 functional-524458 containerd[3791]: time="2025-11-24T02:36:35.941244511Z" level=info msg="container event discarded" container=a8023462e891a8bc59384048c0412553ee2cb83f191f7865c50e70a47f2c5e26 type=CONTAINER_STOPPED_EVENT
	Nov 24 02:36:36 functional-524458 containerd[3791]: time="2025-11-24T02:36:36.001690214Z" level=info msg="container event discarded" container=9d4e9836cae55bbedae9f6e86b045334f2599454b4d798c441d5ec93f6c930af type=CONTAINER_STOPPED_EVENT
	Nov 24 02:36:37 functional-524458 containerd[3791]: time="2025-11-24T02:36:37.596166814Z" level=info msg="container event discarded" container=cabebfa1d5c877bf3cd69d20bc91b4623549e427424bf2698cfa5885285e48a4 type=CONTAINER_STOPPED_EVENT
	Nov 24 02:36:37 functional-524458 containerd[3791]: time="2025-11-24T02:36:37.654099379Z" level=info msg="container event discarded" container=e02e4b9a9e6274d5ddc5eddf6e0ea052e997b9257f4835610f6701f99afdef55 type=CONTAINER_DELETED_EVENT
	Nov 24 02:36:38 functional-524458 containerd[3791]: time="2025-11-24T02:36:38.083771212Z" level=info msg="container event discarded" container=2402aa0a440d9433b29a82464bb9b8fc9be1875f342295ed598b56c3c455966c type=CONTAINER_CREATED_EVENT
	Nov 24 02:36:38 functional-524458 containerd[3791]: time="2025-11-24T02:36:38.083853935Z" level=info msg="container event discarded" container=3c37f8c32c41ed1a47767957e0e11d8a45a9a5e520e681dfe0b2c1244b78c872 type=CONTAINER_CREATED_EVENT
	Nov 24 02:36:38 functional-524458 containerd[3791]: time="2025-11-24T02:36:38.168163929Z" level=info msg="container event discarded" container=f549fcd0e6bdedf78d5f8796cd061ed46f80061fae6afc1570ac088f668f644b type=CONTAINER_CREATED_EVENT
	Nov 24 02:36:38 functional-524458 containerd[3791]: time="2025-11-24T02:36:38.168226149Z" level=info msg="container event discarded" container=f549fcd0e6bdedf78d5f8796cd061ed46f80061fae6afc1570ac088f668f644b type=CONTAINER_STARTED_EVENT
	Nov 24 02:36:38 functional-524458 containerd[3791]: time="2025-11-24T02:36:38.168240163Z" level=info msg="container event discarded" container=2402aa0a440d9433b29a82464bb9b8fc9be1875f342295ed598b56c3c455966c type=CONTAINER_STARTED_EVENT
	Nov 24 02:36:38 functional-524458 containerd[3791]: time="2025-11-24T02:36:38.168247991Z" level=info msg="container event discarded" container=3c37f8c32c41ed1a47767957e0e11d8a45a9a5e520e681dfe0b2c1244b78c872 type=CONTAINER_STARTED_EVENT
	Nov 24 02:36:38 functional-524458 containerd[3791]: time="2025-11-24T02:36:38.190429456Z" level=info msg="container event discarded" container=ea021386d9aa8a8c91cb8a7d06750b8bc3c8ae4984b484b36e172d2d51607ca6 type=CONTAINER_CREATED_EVENT
	Nov 24 02:36:38 functional-524458 containerd[3791]: time="2025-11-24T02:36:38.273513508Z" level=info msg="container event discarded" container=ea021386d9aa8a8c91cb8a7d06750b8bc3c8ae4984b484b36e172d2d51607ca6 type=CONTAINER_STARTED_EVENT
	Nov 24 02:36:39 functional-524458 containerd[3791]: time="2025-11-24T02:36:39.740629499Z" level=info msg="container event discarded" container=5d81c35dc13eeae2bdf1ea71e233063b39566097abc8503931e7ed5f9f999c27 type=CONTAINER_STOPPED_EVENT
	Nov 24 02:36:39 functional-524458 containerd[3791]: time="2025-11-24T02:36:39.964189486Z" level=info msg="container event discarded" container=2d6aad34e22f65a6adde2c1908faa771c84b1c7108f288e562d127f56a671c37 type=CONTAINER_CREATED_EVENT
	Nov 24 02:36:40 functional-524458 containerd[3791]: time="2025-11-24T02:36:40.017477382Z" level=info msg="container event discarded" container=2d6aad34e22f65a6adde2c1908faa771c84b1c7108f288e562d127f56a671c37 type=CONTAINER_STARTED_EVENT
	Nov 24 02:36:40 functional-524458 containerd[3791]: time="2025-11-24T02:36:40.714007397Z" level=info msg="container event discarded" container=a8023462e891a8bc59384048c0412553ee2cb83f191f7865c50e70a47f2c5e26 type=CONTAINER_DELETED_EVENT
	Nov 24 02:37:00 functional-524458 containerd[3791]: time="2025-11-24T02:37:00.457319892Z" level=info msg="container event discarded" container=4a1bd2d9214bfa209f3833bdc14e0213413996e285268e580cfc0deff338873a type=CONTAINER_CREATED_EVENT
	Nov 24 02:37:00 functional-524458 containerd[3791]: time="2025-11-24T02:37:00.457431609Z" level=info msg="container event discarded" container=4a1bd2d9214bfa209f3833bdc14e0213413996e285268e580cfc0deff338873a type=CONTAINER_STARTED_EVENT
	Nov 24 02:37:03 functional-524458 containerd[3791]: time="2025-11-24T02:37:03.509485329Z" level=info msg="container event discarded" container=4a1bd2d9214bfa209f3833bdc14e0213413996e285268e580cfc0deff338873a type=CONTAINER_STOPPED_EVENT
	Nov 24 02:37:04 functional-524458 containerd[3791]: time="2025-11-24T02:37:04.332415606Z" level=info msg="container event discarded" container=e68c0b5ce769e7ecb114676ca333f2ff30c97bb92d54663c50479c593135acd8 type=CONTAINER_CREATED_EVENT
	Nov 24 02:37:04 functional-524458 containerd[3791]: time="2025-11-24T02:37:04.332483773Z" level=info msg="container event discarded" container=e68c0b5ce769e7ecb114676ca333f2ff30c97bb92d54663c50479c593135acd8 type=CONTAINER_STARTED_EVENT
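The entries above are containerd's systemd journal; assuming the stock containerd unit in the kicbase image, a comparable tail can usually be pulled from the node with:

    # last containerd journal entries inside the minikube node
    out/minikube-linux-amd64 ssh -p functional-524458 -- sudo journalctl -u containerd --no-pager -n 100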
	
	
	==> coredns [727f77f614fb0a0b55f6253486a5a0fde92abd053c3b6f96e7486e7c98748d27] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38803 - 57960 "HINFO IN 5576012265632122714.157634741446322758. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026213698s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ff1f2401e08887269b9ebfd8fd03528e5039f46f83d4774cb9fb801caa36a503] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47722 - 43225 "HINFO IN 3493354150572541863.540786576554480922. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.091381734s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
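Both CoreDNS dumps belong to the same pod, coredns-66bc5c9577-vm5lj: the first is the currently running container, which retried against 10.96.0.1:443 while the apiserver was restarting, and the second is the instance that was terminated by SIGTERM. With a kubeconfig for this profile, the pair would typically be fetched as:

    kubectl -n kube-system logs coredns-66bc5c9577-vm5lj              # running instance
    kubectl -n kube-system logs coredns-66bc5c9577-vm5lj --previous   # terminated instance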
	
	
	==> describe nodes <==
	Name:               functional-524458
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-524458
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=functional-524458
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T02_30_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 02:30:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-524458
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 02:36:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 02:32:41 +0000   Mon, 24 Nov 2025 02:30:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 02:32:41 +0000   Mon, 24 Nov 2025 02:30:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 02:32:41 +0000   Mon, 24 Nov 2025 02:30:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 02:32:41 +0000   Mon, 24 Nov 2025 02:31:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-524458
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                abdae176-c8fb-4f16-9193-b297c7e2de4f
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8t9t8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-66bc5c9577-vm5lj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m17s
	  kube-system                 etcd-functional-524458                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m22s
	  kube-system                 kindnet-z2hwm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m17s
	  kube-system                 kube-apiserver-functional-524458              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-controller-manager-functional-524458     200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-fpnq6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-functional-524458              100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-dbcxf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-88tpq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m15s                  kube-proxy       
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m22s                  kubelet          Node functional-524458 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m22s                  kubelet          Node functional-524458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s                  kubelet          Node functional-524458 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m22s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m18s                  node-controller  Node functional-524458 event: Registered Node functional-524458 in Controller
	  Normal  NodeReady                6m5s                   kubelet          Node functional-524458 status is now: NodeReady
	  Normal  Starting                 5m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m29s (x8 over 5m29s)  kubelet          Node functional-524458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m29s (x8 over 5m29s)  kubelet          Node functional-524458 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m29s (x7 over 5m29s)  kubelet          Node functional-524458 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node functional-524458 event: Registered Node functional-524458 in Controller
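The node description above is plain kubectl output; with a kubeconfig pointed at this profile it can be regenerated at any time:

    kubectl describe node functional-524458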
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
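These are kernel ring-buffer messages from the host kernel the node container shares (kernel 6.8.0-1044-gcp, matching the docker info above); a fresh snapshot can be taken from inside the node with:

    out/minikube-linux-amd64 ssh -p functional-524458 -- sudo dmesg --ctime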
	
	
	==> etcd [2402aa0a440d9433b29a82464bb9b8fc9be1875f342295ed598b56c3c455966c] <==
	{"level":"warn","ts":"2025-11-24T02:31:38.981980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:38.988596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.003701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.011607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.017725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.024097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.030213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.036498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.042249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.049380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.062954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.069320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.075380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.081771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.087969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.094342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.101674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.107927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.120463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.126657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.132754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.146244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.152525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.159115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.211463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34458","server-name":"","error":"EOF"}
	
	
	==> etcd [9d4e9836cae55bbedae9f6e86b045334f2599454b4d798c441d5ec93f6c930af] <==
	{"level":"warn","ts":"2025-11-24T02:30:41.152953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.163687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.169222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.184769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.191917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.198031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.239271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37594","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T02:31:35.938040Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T02:31:35.938123Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-524458","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T02:31:35.938224Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T02:31:35.939802Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T02:31:35.939876Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:35.939895Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-24T02:31:35.939971Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T02:31:35.939975Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T02:31:35.940018Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T02:31:35.940028Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T02:31:35.940028Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T02:31:35.940046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:35.940033Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T02:31:35.940027Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T02:31:35.942031Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T02:31:35.942094Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:35.942124Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T02:31:35.942136Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-524458","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:37:06 up 19 min,  0 user,  load average: 0.05, 0.38, 0.40
	Linux functional-524458 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [69ec8eb7d80599bf65d41066aa2aee4949f2acfa4261127f4c76f4644245664c] <==
	I1124 02:34:56.771402       1 main.go:301] handling current node
	I1124 02:35:06.765524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:35:06.765553       1 main.go:301] handling current node
	I1124 02:35:16.767606       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:35:16.767639       1 main.go:301] handling current node
	I1124 02:35:26.767843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:35:26.767879       1 main.go:301] handling current node
	I1124 02:35:36.764698       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:35:36.764731       1 main.go:301] handling current node
	I1124 02:35:46.765498       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:35:46.765548       1 main.go:301] handling current node
	I1124 02:35:56.765064       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:35:56.765094       1 main.go:301] handling current node
	I1124 02:36:06.764705       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:36:06.764748       1 main.go:301] handling current node
	I1124 02:36:16.764507       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:36:16.764547       1 main.go:301] handling current node
	I1124 02:36:26.765769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:36:26.765837       1 main.go:301] handling current node
	I1124 02:36:36.764560       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:36:36.764598       1 main.go:301] handling current node
	I1124 02:36:46.764541       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:36:46.764573       1 main.go:301] handling current node
	I1124 02:36:56.766106       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:36:56.766143       1 main.go:301] handling current node
	
	
	==> kindnet [a4e33a61af8ccbb5eaccab81c84cbce715cf4d1a4b518dc59c2c36603f42d57b] <==
	I1124 02:30:50.961261       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 02:30:50.961496       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 02:30:50.961632       1 main.go:148] setting mtu 1500 for CNI 
	I1124 02:30:50.961649       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 02:30:50.961677       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T02:30:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 02:30:51.256361       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 02:30:51.256419       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 02:30:51.256434       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 02:30:51.256582       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 02:30:51.556556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 02:30:51.556585       1 metrics.go:72] Registering metrics
	I1124 02:30:51.556626       1 controller.go:711] "Syncing nftables rules"
	I1124 02:31:01.166865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:01.166931       1 main.go:301] handling current node
	I1124 02:31:11.173260       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:11.173293       1 main.go:301] handling current node
	I1124 02:31:21.165849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:21.165882       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ea021386d9aa8a8c91cb8a7d06750b8bc3c8ae4984b484b36e172d2d51607ca6] <==
	I1124 02:31:39.664584       1 aggregator.go:171] initial CRD sync complete...
	I1124 02:31:39.665026       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 02:31:39.665033       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 02:31:39.665039       1 cache.go:39] Caches are synced for autoregister controller
	I1124 02:31:39.665112       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 02:31:39.665156       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 02:31:39.671136       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 02:31:39.687077       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 02:31:39.789384       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 02:31:39.789384       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 02:31:40.567874       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1124 02:31:40.774123       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1124 02:31:40.775592       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 02:31:40.781324       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 02:31:41.502163       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 02:31:41.596463       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 02:31:41.651094       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 02:31:41.658579       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 02:31:42.998188       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 02:32:00.019748       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.68.228"}
	I1124 02:32:03.955744       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.172.10"}
	I1124 02:32:05.906877       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 02:32:06.068465       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.50.142"}
	I1124 02:32:06.082826       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.246.74"}
	I1124 02:32:08.388293       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.90.125"}
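The allocated clusterIPs near the end correspond to the services the functional tests create (invalid-svc, hello-node, the two dashboard services, and nginx-svc); they can be cross-checked against the live cluster with:

    kubectl get svc -A -o wide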
	
	
	==> kube-controller-manager [3c37f8c32c41ed1a47767957e0e11d8a45a9a5e520e681dfe0b2c1244b78c872] <==
	I1124 02:31:42.971334       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-524458"
	I1124 02:31:42.971430       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 02:31:42.992772       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 02:31:42.992836       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 02:31:42.992849       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 02:31:42.992943       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 02:31:42.992975       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 02:31:42.993031       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 02:31:42.993071       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 02:31:42.993080       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 02:31:42.993088       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 02:31:42.993194       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 02:31:42.993367       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 02:31:42.993528       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 02:31:42.999458       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 02:31:43.007691       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 02:31:43.010002       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 02:31:43.014358       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 02:32:05.973912       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.983036       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.984232       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.989896       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.991546       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.996597       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.998241       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
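The repeated "serviceaccount \"kubernetes-dashboard\" not found" errors at 02:32:05 are the ReplicaSet controller racing the dashboard addon's ServiceAccount creation; both dashboard pods do appear in the node description above, so the syncs eventually succeeded. A quick way to confirm the objects exist:

    kubectl -n kubernetes-dashboard get serviceaccounts,pods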
	
	
	==> kube-controller-manager [cabebfa1d5c877bf3cd69d20bc91b4623549e427424bf2698cfa5885285e48a4] <==
	I1124 02:31:27.176289       1 serving.go:386] Generated self-signed cert in-memory
	I1124 02:31:27.518820       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1124 02:31:27.518843       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:31:27.520250       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1124 02:31:27.520295       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1124 02:31:27.520657       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1124 02:31:27.520685       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 02:31:37.523230       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [35bec471e4f9f67e63e215b9a948fc13ea1c482c87ba3da7c89537d2b954fc21] <==
	I1124 02:31:26.546883       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1124 02:31:26.547817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:27.584158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:30.628675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:36.426381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1124 02:31:46.348002       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:31:46.348054       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:31:46.348172       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:31:46.370611       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:31:46.370675       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:31:46.376507       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:31:46.376952       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:31:46.376970       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:31:46.378288       1 config.go:200] "Starting service config controller"
	I1124 02:31:46.378312       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:31:46.378299       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:31:46.378358       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:31:46.378397       1 config.go:309] "Starting node config controller"
	I1124 02:31:46.378411       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:31:46.378436       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:31:46.378441       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:31:46.478518       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:31:46.478573       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 02:31:46.478574       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 02:31:46.478599       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [bcaa0dfec6478b7b3d78ebb52e18e001772dbf1fb031bbb5f5aeee3f6f2e047b] <==
	I1124 02:30:50.589509       1 server_linux.go:53] "Using iptables proxy"
	I1124 02:30:50.649864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 02:30:50.750262       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:30:50.750315       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:30:50.750429       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:30:50.775421       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:30:50.775498       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:30:50.781395       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:30:50.781709       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:30:50.781726       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:30:50.783193       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:30:50.783226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:30:50.783251       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:30:50.783250       1 config.go:200] "Starting service config controller"
	I1124 02:30:50.783257       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:30:50.783263       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:30:50.783275       1 config.go:309] "Starting node config controller"
	I1124 02:30:50.783283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:30:50.783291       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 02:30:50.884070       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:30:50.884192       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 02:30:50.884259       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [011ce34e2a26582eb45146eecd7a13e0eecbbfb40048e91c82e8204398646303] <==
	E1124 02:30:41.648514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:30:41.648522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:30:41.648547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:30:41.648550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:30:41.648654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:30:41.648675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:30:41.648745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:30:41.648760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:30:42.465005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:30:42.515396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:30:42.523703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:30:42.529861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:30:42.625483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:30:42.628516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:30:42.666309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:30:42.679452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:30:42.825835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:30:42.900062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1124 02:30:45.044111       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 02:31:25.794834       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 02:31:25.794902       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 02:31:25.795210       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 02:31:25.795233       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 02:31:25.795368       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 02:31:25.795398       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [33d9520aecf65cccdf85716e396cb41948cafd4c476f6950b9b870412cbad9b5] <==
	E1124 02:31:32.107552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:31:32.246428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:31:32.256066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:31:32.279681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:31:32.484611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:34.338665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:31:34.646649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:31:35.195266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:31:35.223753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:31:35.322629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:31:35.443983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:31:35.698518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 02:31:36.005382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 02:31:36.379704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:31:36.402500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:36.675413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:31:36.904160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:31:36.909692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:31:37.085287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:31:37.194098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:31:37.273123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 02:31:37.322954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:31:38.185146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:31:39.583259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1124 02:31:46.047493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 02:35:57 functional-524458 kubelet[4784]: E1124 02:35:57.655630    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-88tpq" podUID="25631234-5164-4822-8c75-7190bda5530f"
	Nov 24 02:36:01 functional-524458 kubelet[4784]: E1124 02:36:01.656101    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ccb78ded-5900-46ff-be89-2019899a83b5"
	Nov 24 02:36:05 functional-524458 kubelet[4784]: E1124 02:36:05.651828    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9bc43f9e-db29-442d-880b-c8a84389aeec"
	Nov 24 02:36:09 functional-524458 kubelet[4784]: E1124 02:36:09.651929    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-88tpq" podUID="25631234-5164-4822-8c75-7190bda5530f"
	Nov 24 02:36:09 functional-524458 kubelet[4784]: E1124 02:36:09.651952    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-dbcxf" podUID="d5f11dfa-2d20-453d-87e6-0855f65e82b0"
	Nov 24 02:36:12 functional-524458 kubelet[4784]: E1124 02:36:12.651954    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-8t9t8" podUID="552b6aef-26fa-4446-b5b4-d44e2975e21d"
	Nov 24 02:36:14 functional-524458 kubelet[4784]: E1124 02:36:14.651894    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ccb78ded-5900-46ff-be89-2019899a83b5"
	Nov 24 02:36:19 functional-524458 kubelet[4784]: E1124 02:36:19.651949    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9bc43f9e-db29-442d-880b-c8a84389aeec"
	Nov 24 02:36:21 functional-524458 kubelet[4784]: E1124 02:36:21.652268    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-dbcxf" podUID="d5f11dfa-2d20-453d-87e6-0855f65e82b0"
	Nov 24 02:36:23 functional-524458 kubelet[4784]: E1124 02:36:23.652272    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-88tpq" podUID="25631234-5164-4822-8c75-7190bda5530f"
	Nov 24 02:36:25 functional-524458 kubelet[4784]: E1124 02:36:25.652295    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-8t9t8" podUID="552b6aef-26fa-4446-b5b4-d44e2975e21d"
	Nov 24 02:36:25 functional-524458 kubelet[4784]: E1124 02:36:25.652711    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ccb78ded-5900-46ff-be89-2019899a83b5"
	Nov 24 02:36:30 functional-524458 kubelet[4784]: E1124 02:36:30.651981    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9bc43f9e-db29-442d-880b-c8a84389aeec"
	Nov 24 02:36:32 functional-524458 kubelet[4784]: E1124 02:36:32.652338    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-dbcxf" podUID="d5f11dfa-2d20-453d-87e6-0855f65e82b0"
	Nov 24 02:36:34 functional-524458 kubelet[4784]: E1124 02:36:34.652389    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-88tpq" podUID="25631234-5164-4822-8c75-7190bda5530f"
	Nov 24 02:36:39 functional-524458 kubelet[4784]: E1124 02:36:39.651499    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-8t9t8" podUID="552b6aef-26fa-4446-b5b4-d44e2975e21d"
	Nov 24 02:36:39 functional-524458 kubelet[4784]: E1124 02:36:39.651834    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ccb78ded-5900-46ff-be89-2019899a83b5"
	Nov 24 02:36:44 functional-524458 kubelet[4784]: E1124 02:36:44.651522    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9bc43f9e-db29-442d-880b-c8a84389aeec"
	Nov 24 02:36:44 functional-524458 kubelet[4784]: E1124 02:36:44.652421    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-dbcxf" podUID="d5f11dfa-2d20-453d-87e6-0855f65e82b0"
	Nov 24 02:36:46 functional-524458 kubelet[4784]: E1124 02:36:46.652635    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-88tpq" podUID="25631234-5164-4822-8c75-7190bda5530f"
	Nov 24 02:36:52 functional-524458 kubelet[4784]: E1124 02:36:52.651208    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-8t9t8" podUID="552b6aef-26fa-4446-b5b4-d44e2975e21d"
	Nov 24 02:36:53 functional-524458 kubelet[4784]: E1124 02:36:53.652270    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ccb78ded-5900-46ff-be89-2019899a83b5"
	Nov 24 02:36:57 functional-524458 kubelet[4784]: E1124 02:36:57.652282    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-88tpq" podUID="25631234-5164-4822-8c75-7190bda5530f"
	Nov 24 02:36:57 functional-524458 kubelet[4784]: E1124 02:36:57.652402    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-dbcxf" podUID="d5f11dfa-2d20-453d-87e6-0855f65e82b0"
	Nov 24 02:36:59 functional-524458 kubelet[4784]: E1124 02:36:59.651872    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9bc43f9e-db29-442d-880b-c8a84389aeec"
	
	
	==> storage-provisioner [1933b021444ba331525fa058f0fd57fefe0a1dc2f1ad2bfe07daf3d4de6d2b40] <==
	I1124 02:31:26.369727       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 02:31:26.371564       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [2d6aad34e22f65a6adde2c1908faa771c84b1c7108f288e562d127f56a671c37] <==
	W1124 02:36:40.493343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:42.496426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:42.500118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:44.503497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:44.508212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:46.512047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:46.515835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:48.518857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:48.523683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:50.527087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:50.530924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:52.534058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:52.537907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:54.541033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:54.544926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:56.547974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:56.551653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:58.555018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:36:58.559207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:37:00.561913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:37:00.566497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:37:02.569900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:37:02.573513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:37:04.576798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:37:04.581267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-524458 -n functional-524458
helpers_test.go:269: (dbg) Run:  kubectl --context functional-524458 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-8t9t8 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-524458 describe pod busybox-mount hello-node-75c85bcc94-8t9t8 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-524458 describe pod busybox-mount hello-node-75c85bcc94-8t9t8 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq: exit status 1 (86.946029ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:06 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  containerd://f4209b6719c49eceae60f8419a324de5ece633779675ba9d470e3b8d2a06797c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 02:32:13 +0000
	      Finished:     Mon, 24 Nov 2025 02:32:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vglvc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vglvc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m1s   default-scheduler  Successfully assigned default/busybox-mount to functional-524458
	  Normal  Pulling    5m     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m54s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.088s (6.425s including waiting). Image size: 2395207 bytes.
	  Normal  Created    4m54s  kubelet            Created container: mount-munger
	  Normal  Started    4m54s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-8t9t8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:03 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l4v7c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l4v7c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  5m4s  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8t9t8 to functional-524458
	  Warning  Failed     5m1s  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  115s (x5 over 5m3s)   kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   113s (x5 over 5m1s)   kubelet  Error: ErrImagePull
	  Warning  Failed   113s (x4 over 4m46s)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   55s (x15 over 5m)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  1s (x19 over 5m)   kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:08 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5g4km (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5g4km:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age    From               Message
	  ----     ------     ----   ----               -------
	  Normal   Scheduled  4m59s  default-scheduler  Successfully assigned default/nginx-svc to functional-524458
	  Warning  Failed     4m51s  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  118s (x5 over 4m59s)  kubelet  Pulling image "docker.io/nginx:alpine"
	  Warning  Failed   116s (x5 over 4m51s)  kubelet  Error: ErrImagePull
	  Warning  Failed   116s (x4 over 4m35s)  kubelet  Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   42s (x15 over 4m51s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  14s (x17 over 4m51s)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:26 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5dn22 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-5dn22:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m41s                default-scheduler  Successfully assigned default/sp-pod to functional-524458
	  Normal   Pulling    89s (x5 over 4m41s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     87s (x5 over 4m38s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   87s (x5 over 4m38s)   kubelet  Error: ErrImagePull
	  Warning  Failed   37s (x15 over 4m38s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  8s (x17 over 4m38s)   kubelet  Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-dbcxf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-88tpq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-524458 describe pod busybox-mount hello-node-75c85bcc94-8t9t8 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.15s)
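Every failure in this block traces back to the same root cause visible in the kubelet events above: unauthenticated pulls from registry-1.docker.io are rejected with 429 Too Many Requests. One way to make a local re-run independent of Docker Hub is to pre-load the images the tests need with minikube image load, so the kubelet never has to pull them. This is only a sketch, not part of the captured run: it assumes the images can still be fetched once by an authenticated (or un-throttled) local Docker client, and reuses the profile name from this run.

	docker login                                   # authenticate so the one-off local pulls are not rate-limited
	docker pull kicbase/echo-server:latest
	docker pull nginx:alpine
	out/minikube-linux-amd64 -p functional-524458 image load kicbase/echo-server:latest
	out/minikube-linux-amd64 -p functional-524458 image load nginx:alpine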

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (602.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-524458 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-524458 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-bm6nh" [dd29dc9d-2fde-4547-87b6-959980d93ebc] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
I1124 02:37:13.376045    8429 retry.go:31] will retry after 50.787912826s: Temporary Error: Get "http:": http: no Host in request URL
E1124 02:37:14.218989    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-524458 -n functional-524458
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-24 02:47:07.519640411 +0000 UTC m=+1372.243994228
functional_test.go:1645: (dbg) Run:  kubectl --context functional-524458 describe po hello-node-connect-7d85dfc575-bm6nh -n default
functional_test.go:1645: (dbg) kubectl --context functional-524458 describe po hello-node-connect-7d85dfc575-bm6nh -n default:
Name:             hello-node-connect-7d85dfc575-bm6nh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-524458/192.168.49.2
Start Time:       Mon, 24 Nov 2025 02:37:07 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:  10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85j9g (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-85j9g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  10m    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-bm6nh to functional-524458
  Warning  Failed     9m57s  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling  7m (x5 over 10m)       kubelet  Pulling image "kicbase/echo-server"
  Warning  Failed   6m58s (x5 over 9m57s)  kubelet  Error: ErrImagePull
  Warning  Failed   6m58s (x4 over 9m43s)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed   4m45s (x20 over 9m57s)  kubelet  Error: ImagePullBackOff
  Normal   BackOff  4m33s (x21 over 9m57s)  kubelet  Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-524458 logs hello-node-connect-7d85dfc575-bm6nh -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-524458 logs hello-node-connect-7d85dfc575-bm6nh -n default: exit status 1 (73.434333ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-bm6nh" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-524458 logs hello-node-connect-7d85dfc575-bm6nh -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-524458 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-bm6nh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-524458/192.168.49.2
Start Time:       Mon, 24 Nov 2025 02:37:07 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:  10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85j9g (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-85j9g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  10m    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-bm6nh to functional-524458
  Warning  Failed     9m57s  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling  7m (x5 over 10m)       kubelet  Pulling image "kicbase/echo-server"
  Warning  Failed   6m58s (x5 over 9m57s)  kubelet  Error: ErrImagePull
  Warning  Failed   6m58s (x4 over 9m43s)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed   4m45s (x20 over 9m57s)  kubelet  Error: ImagePullBackOff
  Normal   BackOff  4m33s (x21 over 9m57s)  kubelet  Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-524458 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-524458 logs -l app=hello-node-connect: exit status 1 (63.994603ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-bm6nh" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-524458 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-524458 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.160.243
IPs:                      10.110.160.243
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31991/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
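The empty Endpoints field above is consistent with the pod events: the single hello-node-connect pod never became Ready because its image was never pulled, so the NodePort service has nothing to route to. A quick way to confirm that from the same context, shown here only as an illustrative follow-up and not part of the captured run:

	kubectl --context functional-524458 get endpoints hello-node-connect
	kubectl --context functional-524458 get pods -l app=hello-node-connect -o wide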
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-524458
helpers_test.go:243: (dbg) docker inspect functional-524458:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34",
	        "Created": "2025-11-24T02:30:28.925146241Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40439,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T02:30:28.96111684Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/hosts",
	        "LogPath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34-json.log",
	        "Name": "/functional-524458",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-524458:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-524458",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34",
	                "LowerDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-524458",
	                "Source": "/var/lib/docker/volumes/functional-524458/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-524458",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-524458",
	                "name.minikube.sigs.k8s.io": "functional-524458",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b7e0113de159bb7f20d80f3f8f3ea57d04b5854af723c36f353c1401899bee04",
	            "SandboxKey": "/var/run/docker/netns/b7e0113de159",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-524458": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7ce998af76f9bf37a9b0b37e8dc03d8566ef5a726be1278dc8886354dffa2129",
	                    "EndpointID": "e57019316db23c37637b8f4e72b83f56be989c49058967b2c1d7a721d73ffb4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "72:bd:14:22:6d:10",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-524458",
	                        "8f46810d4481"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-524458 -n functional-524458
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-524458 logs -n 25: (1.249974603s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                              ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-524458 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image ls                                                                                                      │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image save --daemon kicbase/echo-server:functional-524458 --alsologtostderr                                   │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /etc/test/nested/copy/8429/hosts                                                                 │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /etc/ssl/certs/8429.pem                                                                          │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /usr/share/ca-certificates/8429.pem                                                              │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /etc/ssl/certs/51391683.0                                                                        │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /etc/ssl/certs/84292.pem                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /usr/share/ca-certificates/84292.pem                                                             │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                        │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image ls --format short --alsologtostderr                                                                     │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image ls --format yaml --alsologtostderr                                                                      │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh pgrep buildkitd                                                                                           │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │                     │
	│ image          │ functional-524458 image build -t localhost/my-image:functional-524458 testdata/build --alsologtostderr                          │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image ls                                                                                                      │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image ls --format json --alsologtostderr                                                                      │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image ls --format table --alsologtostderr                                                                     │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ update-context │ functional-524458 update-context --alsologtostderr -v=2                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ update-context │ functional-524458 update-context --alsologtostderr -v=2                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ update-context │ functional-524458 update-context --alsologtostderr -v=2                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ service        │ functional-524458 service list                                                                                                  │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:42 UTC │ 24 Nov 25 02:42 UTC │
	│ service        │ functional-524458 service list -o json                                                                                          │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:42 UTC │ 24 Nov 25 02:42 UTC │
	│ service        │ functional-524458 service --namespace=default --https --url hello-node                                                          │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:42 UTC │                     │
	│ service        │ functional-524458 service hello-node --url --format={{.IP}}                                                                     │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:42 UTC │                     │
	│ service        │ functional-524458 service hello-node --url                                                                                      │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:42 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:32:04
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:32:04.712497   49906 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:32:04.712948   49906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:04.712960   49906 out.go:374] Setting ErrFile to fd 2...
	I1124 02:32:04.712966   49906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:04.713312   49906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 02:32:04.713858   49906 out.go:368] Setting JSON to false
	I1124 02:32:04.715081   49906 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":868,"bootTime":1763950657,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:32:04.715153   49906 start.go:143] virtualization: kvm guest
	I1124 02:32:04.716957   49906 out.go:179] * [functional-524458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:32:04.718285   49906 notify.go:221] Checking for updates...
	I1124 02:32:04.718332   49906 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:32:04.719589   49906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:32:04.720934   49906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 02:32:04.722032   49906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 02:32:04.723392   49906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:32:04.724722   49906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:32:04.726193   49906 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 02:32:04.726692   49906 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:32:04.751591   49906 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:32:04.751738   49906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:04.812419   49906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:04.802268406 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:04.812559   49906 docker.go:319] overlay module found
	I1124 02:32:04.814655   49906 out.go:179] * Using the docker driver based on existing profile
	I1124 02:32:04.815752   49906 start.go:309] selected driver: docker
	I1124 02:32:04.815794   49906 start.go:927] validating driver "docker" against &{Name:functional-524458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-524458 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:04.815939   49906 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:32:04.816055   49906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:04.889898   49906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:04.876051797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:04.890729   49906 cni.go:84] Creating CNI manager for ""
	I1124 02:32:04.890846   49906 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 02:32:04.890914   49906 start.go:353] cluster config:
	{Name:functional-524458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-524458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:04.893578   49906 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f4209b6719c49       56cc512116c8f       14 minutes ago      Exited              mount-munger              0                   56e203a84c75b       busybox-mount                               default
	2d6aad34e22f6       6e38f40d628db       15 minutes ago      Running             storage-provisioner       2                   04364561b4944       storage-provisioner                         kube-system
	ea021386d9aa8       c3994bc696102       15 minutes ago      Running             kube-apiserver            0                   f549fcd0e6bde       kube-apiserver-functional-524458            kube-system
	2402aa0a440d9       5f1f5298c888d       15 minutes ago      Running             etcd                      1                   755542b390469       etcd-functional-524458                      kube-system
	3c37f8c32c41e       c80c8dbafe7dd       15 minutes ago      Running             kube-controller-manager   2                   32624cec026c9       kube-controller-manager-functional-524458   kube-system
	727f77f614fb0       52546a367cc9e       15 minutes ago      Running             coredns                   1                   9655c54274a11       coredns-66bc5c9577-vm5lj                    kube-system
	35bec471e4f9f       fc25172553d79       15 minutes ago      Running             kube-proxy                1                   e45eacf9b156d       kube-proxy-fpnq6                            kube-system
	69ec8eb7d8059       409467f978b4a       15 minutes ago      Running             kindnet-cni               1                   9a6c08ca602bb       kindnet-z2hwm                               kube-system
	cabebfa1d5c87       c80c8dbafe7dd       15 minutes ago      Exited              kube-controller-manager   1                   32624cec026c9       kube-controller-manager-functional-524458   kube-system
	33d9520aecf65       7dd6aaa1717ab       15 minutes ago      Running             kube-scheduler            1                   0f09143310ce8       kube-scheduler-functional-524458            kube-system
	1933b021444ba       6e38f40d628db       15 minutes ago      Exited              storage-provisioner       1                   04364561b4944       storage-provisioner                         kube-system
	ff1f2401e0888       52546a367cc9e       16 minutes ago      Exited              coredns                   0                   9655c54274a11       coredns-66bc5c9577-vm5lj                    kube-system
	a4e33a61af8cc       409467f978b4a       16 minutes ago      Exited              kindnet-cni               0                   9a6c08ca602bb       kindnet-z2hwm                               kube-system
	bcaa0dfec6478       fc25172553d79       16 minutes ago      Exited              kube-proxy                0                   e45eacf9b156d       kube-proxy-fpnq6                            kube-system
	011ce34e2a265       7dd6aaa1717ab       16 minutes ago      Exited              kube-scheduler            0                   0f09143310ce8       kube-scheduler-functional-524458            kube-system
	9d4e9836cae55       5f1f5298c888d       16 minutes ago      Exited              etcd                      0                   755542b390469       etcd-functional-524458                      kube-system
	
	
	==> containerd <==
	Nov 24 02:42:07 functional-524458 containerd[3791]: time="2025-11-24T02:42:07.542967095Z" level=info msg="container event discarded" container=aa1534120d2fa6f9e3ace2db37100b43c192c24113447d39b0aba66e324a15ed type=CONTAINER_CREATED_EVENT
	Nov 24 02:42:07 functional-524458 containerd[3791]: time="2025-11-24T02:42:07.543053815Z" level=info msg="container event discarded" container=aa1534120d2fa6f9e3ace2db37100b43c192c24113447d39b0aba66e324a15ed type=CONTAINER_STARTED_EVENT
	Nov 24 02:42:59 functional-524458 containerd[3791]: time="2025-11-24T02:42:59.652099128Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Nov 24 02:43:02 functional-524458 containerd[3791]: time="2025-11-24T02:43:02.263983022Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:43:02 functional-524458 containerd[3791]: time="2025-11-24T02:43:02.264023524Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=11740"
	Nov 24 02:43:02 functional-524458 containerd[3791]: time="2025-11-24T02:43:02.264877319Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Nov 24 02:43:04 functional-524458 containerd[3791]: time="2025-11-24T02:43:04.499125904Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:43:04 functional-524458 containerd[3791]: time="2025-11-24T02:43:04.499160833Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Nov 24 02:43:14 functional-524458 containerd[3791]: time="2025-11-24T02:43:14.651509812Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Nov 24 02:43:15 functional-524458 containerd[3791]: time="2025-11-24T02:43:15.286539915Z" level=info msg="container event discarded" container=10d00d9c3b16649b9914c58e1930bf5b81a650c7ecfb9a15666c80e96d779a38 type=CONTAINER_CREATED_EVENT
	Nov 24 02:43:15 functional-524458 containerd[3791]: time="2025-11-24T02:43:15.286614032Z" level=info msg="container event discarded" container=10d00d9c3b16649b9914c58e1930bf5b81a650c7ecfb9a15666c80e96d779a38 type=CONTAINER_STARTED_EVENT
	Nov 24 02:43:16 functional-524458 containerd[3791]: time="2025-11-24T02:43:16.891410610Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:43:16 functional-524458 containerd[3791]: time="2025-11-24T02:43:16.891445787Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10998"
	Nov 24 02:43:17 functional-524458 containerd[3791]: time="2025-11-24T02:43:17.653459116Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Nov 24 02:43:19 functional-524458 containerd[3791]: time="2025-11-24T02:43:19.876354651Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:43:19 functional-524458 containerd[3791]: time="2025-11-24T02:43:19.876378213Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	Nov 24 02:43:20 functional-524458 containerd[3791]: time="2025-11-24T02:43:20.653330816Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Nov 24 02:43:22 functional-524458 containerd[3791]: time="2025-11-24T02:43:22.882978919Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:43:22 functional-524458 containerd[3791]: time="2025-11-24T02:43:22.883012461Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11046"
	Nov 24 02:43:33 functional-524458 containerd[3791]: time="2025-11-24T02:43:33.651953889Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Nov 24 02:43:35 functional-524458 containerd[3791]: time="2025-11-24T02:43:35.883517886Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:43:35 functional-524458 containerd[3791]: time="2025-11-24T02:43:35.883563454Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Nov 24 02:44:17 functional-524458 containerd[3791]: time="2025-11-24T02:44:17.655986885Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Nov 24 02:44:19 functional-524458 containerd[3791]: time="2025-11-24T02:44:19.892875179Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:44:19 functional-524458 containerd[3791]: time="2025-11-24T02:44:19.892900220Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	
	
	==> coredns [727f77f614fb0a0b55f6253486a5a0fde92abd053c3b6f96e7486e7c98748d27] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38803 - 57960 "HINFO IN 5576012265632122714.157634741446322758. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026213698s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ff1f2401e08887269b9ebfd8fd03528e5039f46f83d4774cb9fb801caa36a503] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47722 - 43225 "HINFO IN 3493354150572541863.540786576554480922. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.091381734s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-524458
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-524458
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=functional-524458
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T02_30_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 02:30:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-524458
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 02:47:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 02:45:45 +0000   Mon, 24 Nov 2025 02:30:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 02:45:45 +0000   Mon, 24 Nov 2025 02:30:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 02:45:45 +0000   Mon, 24 Nov 2025 02:30:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 02:45:45 +0000   Mon, 24 Nov 2025 02:31:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-524458
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                abdae176-c8fb-4f16-9193-b297c7e2de4f
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8t9t8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-node-connect-7d85dfc575-bm6nh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-zpp9z                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     8m54s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-vm5lj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-functional-524458                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-z2hwm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-functional-524458              250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-524458     200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-fpnq6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-functional-524458              100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-dbcxf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-88tpq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientPID     16m                kubelet          Node functional-524458 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node functional-524458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node functional-524458 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           16m                node-controller  Node functional-524458 event: Registered Node functional-524458 in Controller
	  Normal  NodeReady                16m                kubelet          Node functional-524458 status is now: NodeReady
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node functional-524458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node functional-524458 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node functional-524458 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node functional-524458 event: Registered Node functional-524458 in Controller
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [2402aa0a440d9433b29a82464bb9b8fc9be1875f342295ed598b56c3c455966c] <==
	{"level":"warn","ts":"2025-11-24T02:31:39.030213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.036498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.042249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.049380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.062954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.069320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.075380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.081771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.087969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.094342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.101674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.107927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.120463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.126657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.132754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.146244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.152525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.159115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.211463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34458","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T02:41:38.734165Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1190}
	{"level":"info","ts":"2025-11-24T02:41:38.753600Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1190,"took":"19.093539ms","hash":3758438164,"current-db-size-bytes":3837952,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1937408,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-11-24T02:41:38.753645Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3758438164,"revision":1190,"compact-revision":-1}
	{"level":"info","ts":"2025-11-24T02:46:38.739300Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1610}
	{"level":"info","ts":"2025-11-24T02:46:38.742744Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1610,"took":"3.129698ms","hash":2444425337,"current-db-size-bytes":3837952,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":2326528,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-11-24T02:46:38.742792Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2444425337,"revision":1610,"compact-revision":1190}
	
	
	==> etcd [9d4e9836cae55bbedae9f6e86b045334f2599454b4d798c441d5ec93f6c930af] <==
	{"level":"warn","ts":"2025-11-24T02:30:41.152953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.163687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.169222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.184769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.191917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.198031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.239271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37594","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T02:31:35.938040Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T02:31:35.938123Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-524458","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T02:31:35.938224Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T02:31:35.939802Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T02:31:35.939876Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:35.939895Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-24T02:31:35.939971Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T02:31:35.939975Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T02:31:35.940018Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T02:31:35.940028Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T02:31:35.940028Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T02:31:35.940046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:35.940033Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T02:31:35.940027Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T02:31:35.942031Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T02:31:35.942094Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:35.942124Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T02:31:35.942136Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-524458","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:47:09 up 29 min,  0 user,  load average: 0.35, 0.17, 0.26
	Linux functional-524458 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [69ec8eb7d80599bf65d41066aa2aee4949f2acfa4261127f4c76f4644245664c] <==
	I1124 02:45:06.773874       1 main.go:301] handling current node
	I1124 02:45:16.774029       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:45:16.774067       1 main.go:301] handling current node
	I1124 02:45:26.766109       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:45:26.766141       1 main.go:301] handling current node
	I1124 02:45:36.768405       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:45:36.768439       1 main.go:301] handling current node
	I1124 02:45:46.765529       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:45:46.765578       1 main.go:301] handling current node
	I1124 02:45:56.768724       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:45:56.768770       1 main.go:301] handling current node
	I1124 02:46:06.770263       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:46:06.770298       1 main.go:301] handling current node
	I1124 02:46:16.773372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:46:16.773403       1 main.go:301] handling current node
	I1124 02:46:26.765988       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:46:26.766048       1 main.go:301] handling current node
	I1124 02:46:36.766891       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:46:36.766929       1 main.go:301] handling current node
	I1124 02:46:46.764733       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:46:46.764824       1 main.go:301] handling current node
	I1124 02:46:56.765012       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:46:56.765056       1 main.go:301] handling current node
	I1124 02:47:06.773115       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:47:06.773147       1 main.go:301] handling current node
	
	
	==> kindnet [a4e33a61af8ccbb5eaccab81c84cbce715cf4d1a4b518dc59c2c36603f42d57b] <==
	I1124 02:30:50.961261       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 02:30:50.961496       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 02:30:50.961632       1 main.go:148] setting mtu 1500 for CNI 
	I1124 02:30:50.961649       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 02:30:50.961677       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T02:30:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 02:30:51.256361       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 02:30:51.256419       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 02:30:51.256434       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 02:30:51.256582       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 02:30:51.556556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 02:30:51.556585       1 metrics.go:72] Registering metrics
	I1124 02:30:51.556626       1 controller.go:711] "Syncing nftables rules"
	I1124 02:31:01.166865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:01.166931       1 main.go:301] handling current node
	I1124 02:31:11.173260       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:11.173293       1 main.go:301] handling current node
	I1124 02:31:21.165849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:21.165882       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ea021386d9aa8a8c91cb8a7d06750b8bc3c8ae4984b484b36e172d2d51607ca6] <==
	I1124 02:31:39.665039       1 cache.go:39] Caches are synced for autoregister controller
	I1124 02:31:39.665112       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 02:31:39.665156       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 02:31:39.671136       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 02:31:39.687077       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 02:31:39.789384       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 02:31:39.789384       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 02:31:40.567874       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1124 02:31:40.774123       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1124 02:31:40.775592       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 02:31:40.781324       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 02:31:41.502163       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 02:31:41.596463       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 02:31:41.651094       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 02:31:41.658579       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 02:31:42.998188       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 02:32:00.019748       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.68.228"}
	I1124 02:32:03.955744       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.172.10"}
	I1124 02:32:05.906877       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 02:32:06.068465       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.50.142"}
	I1124 02:32:06.082826       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.246.74"}
	I1124 02:32:08.388293       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.90.125"}
	I1124 02:37:07.178058       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.160.243"}
	I1124 02:38:14.810943       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.11.161"}
	I1124 02:41:39.600990       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [3c37f8c32c41ed1a47767957e0e11d8a45a9a5e520e681dfe0b2c1244b78c872] <==
	I1124 02:31:42.971334       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-524458"
	I1124 02:31:42.971430       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 02:31:42.992772       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 02:31:42.992836       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 02:31:42.992849       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 02:31:42.992943       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 02:31:42.992975       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 02:31:42.993031       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 02:31:42.993071       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 02:31:42.993080       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 02:31:42.993088       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 02:31:42.993194       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 02:31:42.993367       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 02:31:42.993528       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 02:31:42.999458       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 02:31:43.007691       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 02:31:43.010002       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 02:31:43.014358       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 02:32:05.973912       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.983036       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.984232       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.989896       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.991546       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.996597       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.998241       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [cabebfa1d5c877bf3cd69d20bc91b4623549e427424bf2698cfa5885285e48a4] <==
	I1124 02:31:27.176289       1 serving.go:386] Generated self-signed cert in-memory
	I1124 02:31:27.518820       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1124 02:31:27.518843       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:31:27.520250       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1124 02:31:27.520295       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1124 02:31:27.520657       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1124 02:31:27.520685       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 02:31:37.523230       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [35bec471e4f9f67e63e215b9a948fc13ea1c482c87ba3da7c89537d2b954fc21] <==
	I1124 02:31:26.546883       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1124 02:31:26.547817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:27.584158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:30.628675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:36.426381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1124 02:31:46.348002       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:31:46.348054       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:31:46.348172       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:31:46.370611       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:31:46.370675       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:31:46.376507       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:31:46.376952       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:31:46.376970       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:31:46.378288       1 config.go:200] "Starting service config controller"
	I1124 02:31:46.378312       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:31:46.378299       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:31:46.378358       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:31:46.378397       1 config.go:309] "Starting node config controller"
	I1124 02:31:46.378411       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:31:46.378436       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:31:46.378441       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:31:46.478518       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:31:46.478573       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 02:31:46.478574       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 02:31:46.478599       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [bcaa0dfec6478b7b3d78ebb52e18e001772dbf1fb031bbb5f5aeee3f6f2e047b] <==
	I1124 02:30:50.589509       1 server_linux.go:53] "Using iptables proxy"
	I1124 02:30:50.649864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 02:30:50.750262       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:30:50.750315       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:30:50.750429       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:30:50.775421       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:30:50.775498       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:30:50.781395       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:30:50.781709       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:30:50.781726       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:30:50.783193       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:30:50.783226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:30:50.783251       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:30:50.783250       1 config.go:200] "Starting service config controller"
	I1124 02:30:50.783257       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:30:50.783263       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:30:50.783275       1 config.go:309] "Starting node config controller"
	I1124 02:30:50.783283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:30:50.783291       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 02:30:50.884070       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:30:50.884192       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 02:30:50.884259       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [011ce34e2a26582eb45146eecd7a13e0eecbbfb40048e91c82e8204398646303] <==
	E1124 02:30:41.648514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:30:41.648522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:30:41.648547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:30:41.648550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:30:41.648654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:30:41.648675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:30:41.648745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:30:41.648760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:30:42.465005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:30:42.515396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:30:42.523703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:30:42.529861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:30:42.625483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:30:42.628516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:30:42.666309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:30:42.679452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:30:42.825835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:30:42.900062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1124 02:30:45.044111       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 02:31:25.794834       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 02:31:25.794902       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 02:31:25.795210       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 02:31:25.795233       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 02:31:25.795368       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 02:31:25.795398       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [33d9520aecf65cccdf85716e396cb41948cafd4c476f6950b9b870412cbad9b5] <==
	E1124 02:31:32.107552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:31:32.246428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:31:32.256066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:31:32.279681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:31:32.484611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:34.338665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:31:34.646649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:31:35.195266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:31:35.223753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:31:35.322629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:31:35.443983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:31:35.698518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 02:31:36.005382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 02:31:36.379704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:31:36.402500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:36.675413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:31:36.904160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:31:36.909692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:31:37.085287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:31:37.194098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:31:37.273123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 02:31:37.322954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:31:38.185146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:31:39.583259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1124 02:31:46.047493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 02:46:26 functional-524458 kubelet[4784]: E1124 02:46:26.652740    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-zpp9z" podUID="40b778fd-24dc-49c9-9f5c-abfbc7dfe529"
	Nov 24 02:46:27 functional-524458 kubelet[4784]: E1124 02:46:27.652307    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-8t9t8" podUID="552b6aef-26fa-4446-b5b4-d44e2975e21d"
	Nov 24 02:46:31 functional-524458 kubelet[4784]: E1124 02:46:31.652968    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ccb78ded-5900-46ff-be89-2019899a83b5"
	Nov 24 02:46:32 functional-524458 kubelet[4784]: E1124 02:46:32.651149    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9bc43f9e-db29-442d-880b-c8a84389aeec"
	Nov 24 02:46:36 functional-524458 kubelet[4784]: E1124 02:46:36.652115    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-bm6nh" podUID="dd29dc9d-2fde-4547-87b6-959980d93ebc"
	Nov 24 02:46:37 functional-524458 kubelet[4784]: E1124 02:46:37.652532    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-zpp9z" podUID="40b778fd-24dc-49c9-9f5c-abfbc7dfe529"
	Nov 24 02:46:37 functional-524458 kubelet[4784]: E1124 02:46:37.652574    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-88tpq" podUID="25631234-5164-4822-8c75-7190bda5530f"
	Nov 24 02:46:39 functional-524458 kubelet[4784]: E1124 02:46:39.651954    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-8t9t8" podUID="552b6aef-26fa-4446-b5b4-d44e2975e21d"
	Nov 24 02:46:39 functional-524458 kubelet[4784]: E1124 02:46:39.652628    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-dbcxf" podUID="d5f11dfa-2d20-453d-87e6-0855f65e82b0"
	Nov 24 02:46:43 functional-524458 kubelet[4784]: E1124 02:46:43.652038    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9bc43f9e-db29-442d-880b-c8a84389aeec"
	Nov 24 02:46:45 functional-524458 kubelet[4784]: E1124 02:46:45.652217    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ccb78ded-5900-46ff-be89-2019899a83b5"
	Nov 24 02:46:48 functional-524458 kubelet[4784]: E1124 02:46:48.651473    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-bm6nh" podUID="dd29dc9d-2fde-4547-87b6-959980d93ebc"
	Nov 24 02:46:49 functional-524458 kubelet[4784]: E1124 02:46:49.652713    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-88tpq" podUID="25631234-5164-4822-8c75-7190bda5530f"
	Nov 24 02:46:50 functional-524458 kubelet[4784]: E1124 02:46:50.652370    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-dbcxf" podUID="d5f11dfa-2d20-453d-87e6-0855f65e82b0"
	Nov 24 02:46:51 functional-524458 kubelet[4784]: E1124 02:46:51.652116    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-8t9t8" podUID="552b6aef-26fa-4446-b5b4-d44e2975e21d"
	Nov 24 02:46:52 functional-524458 kubelet[4784]: E1124 02:46:52.652631    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-zpp9z" podUID="40b778fd-24dc-49c9-9f5c-abfbc7dfe529"
	Nov 24 02:46:56 functional-524458 kubelet[4784]: E1124 02:46:56.651540    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9bc43f9e-db29-442d-880b-c8a84389aeec"
	Nov 24 02:46:57 functional-524458 kubelet[4784]: E1124 02:46:57.653156    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ccb78ded-5900-46ff-be89-2019899a83b5"
	Nov 24 02:47:02 functional-524458 kubelet[4784]: E1124 02:47:02.651747    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-8t9t8" podUID="552b6aef-26fa-4446-b5b4-d44e2975e21d"
	Nov 24 02:47:02 functional-524458 kubelet[4784]: E1124 02:47:02.652416    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-dbcxf" podUID="d5f11dfa-2d20-453d-87e6-0855f65e82b0"
	Nov 24 02:47:03 functional-524458 kubelet[4784]: E1124 02:47:03.651322    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-bm6nh" podUID="dd29dc9d-2fde-4547-87b6-959980d93ebc"
	Nov 24 02:47:03 functional-524458 kubelet[4784]: E1124 02:47:03.652090    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-88tpq" podUID="25631234-5164-4822-8c75-7190bda5530f"
	Nov 24 02:47:04 functional-524458 kubelet[4784]: E1124 02:47:04.652763    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-zpp9z" podUID="40b778fd-24dc-49c9-9f5c-abfbc7dfe529"
	Nov 24 02:47:07 functional-524458 kubelet[4784]: E1124 02:47:07.652211    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9bc43f9e-db29-442d-880b-c8a84389aeec"
	Nov 24 02:47:08 functional-524458 kubelet[4784]: E1124 02:47:08.652160    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ccb78ded-5900-46ff-be89-2019899a83b5"
	
	
	==> storage-provisioner [1933b021444ba331525fa058f0fd57fefe0a1dc2f1ad2bfe07daf3d4de6d2b40] <==
	I1124 02:31:26.369727       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 02:31:26.371564       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [2d6aad34e22f65a6adde2c1908faa771c84b1c7108f288e562d127f56a671c37] <==
	W1124 02:46:44.756460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:46.760122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:46.765386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:48.768114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:48.771842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:50.775087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:50.780005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:52.783125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:52.787257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:54.791106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:54.795594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:56.799073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:56.804039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:58.807818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:46:58.813034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:00.816477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:00.820322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:02.823423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:02.827456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:04.830765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:04.836171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:06.839720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:06.843606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:08.846431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:08.851073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-524458 -n functional-524458
helpers_test.go:269: (dbg) Run:  kubectl --context functional-524458 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-8t9t8 hello-node-connect-7d85dfc575-bm6nh mysql-5bb876957f-zpp9z nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-524458 describe pod busybox-mount hello-node-75c85bcc94-8t9t8 hello-node-connect-7d85dfc575-bm6nh mysql-5bb876957f-zpp9z nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-524458 describe pod busybox-mount hello-node-75c85bcc94-8t9t8 hello-node-connect-7d85dfc575-bm6nh mysql-5bb876957f-zpp9z nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq: exit status 1 (101.480478ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:06 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  containerd://f4209b6719c49eceae60f8419a324de5ece633779675ba9d470e3b8d2a06797c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 02:32:13 +0000
	      Finished:     Mon, 24 Nov 2025 02:32:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vglvc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vglvc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  15m   default-scheduler  Successfully assigned default/busybox-mount to functional-524458
	  Normal  Pulling    15m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     14m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.088s (6.425s including waiting). Image size: 2395207 bytes.
	  Normal  Created    14m   kubelet            Created container: mount-munger
	  Normal  Started    14m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-8t9t8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:03 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l4v7c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l4v7c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  15m   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8t9t8 to functional-524458
	  Warning  Failed     15m   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  11m (x5 over 15m)  kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   11m (x5 over 15m)  kubelet  Error: ErrImagePull
	  Warning  Failed   11m (x4 over 14m)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  4m56s (x42 over 15m)  kubelet  Back-off pulling image "kicbase/echo-server"
	  Warning  Failed   4m56s (x42 over 15m)  kubelet  Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-bm6nh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:37:07 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85j9g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-85j9g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age    From               Message
	  ----     ------     ----   ----               -------
	  Normal   Scheduled  10m    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-bm6nh to functional-524458
	  Warning  Failed     9m59s  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  7m2s (x5 over 10m)  kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   7m (x5 over 9m59s)  kubelet  Error: ErrImagePull
	  Warning  Failed   7m (x4 over 9m45s)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   4m47s (x20 over 9m59s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m35s (x21 over 9m59s)  kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-zpp9z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:38:14 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-knwdf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-knwdf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m55s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-zpp9z to functional-524458
	  Warning  Failed     7m11s (x3 over 8m52s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  5m40s (x5 over 8m54s)  kubelet  Pulling image "docker.io/mysql:5.7"
	  Warning  Failed   5m37s (x5 over 8m52s)  kubelet  Error: ErrImagePull
	  Warning  Failed   5m37s (x2 over 8m35s)  kubelet  Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   3m42s (x20 over 8m52s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  3m31s (x21 over 8m52s)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:08 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5g4km (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5g4km:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  15m   default-scheduler  Successfully assigned default/nginx-svc to functional-524458
	  Warning  Failed     14m   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  12m (x5 over 15m)  kubelet  Pulling image "docker.io/nginx:alpine"
	  Warning  Failed   11m (x5 over 14m)  kubelet  Error: ErrImagePull
	  Warning  Failed   11m (x4 over 14m)  kubelet  Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  4m59s (x40 over 14m)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed   4m46s (x41 over 14m)  kubelet  Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:26 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5dn22 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-5dn22:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  14m                default-scheduler  Successfully assigned default/sp-pod to functional-524458
	  Normal   Pulling    11m (x5 over 14m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     11m (x5 over 14m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   11m (x5 over 14m)     kubelet  Error: ErrImagePull
	  Normal   BackOff  4m34s (x41 over 14m)  kubelet  Back-off pulling image "docker.io/nginx"
	  Warning  Failed   4m34s (x41 over 14m)  kubelet  Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-dbcxf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-88tpq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-524458 describe pod busybox-mount hello-node-75c85bcc94-8t9t8 hello-node-connect-7d85dfc575-bm6nh mysql-5bb876957f-zpp9z nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq: exit status 1
E1124 02:48:09.580933    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.95s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (368.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [b0317e94-6193-4e3c-8c37-f33a34ca1649] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003484261s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-524458 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-524458 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-524458 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-524458 apply -f testdata/storage-provisioner/pod.yaml
I1124 02:32:26.307186    8429 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9bc43f9e-db29-442d-880b-c8a84389aeec] Pending
helpers_test.go:352: "sp-pod" [9bc43f9e-db29-442d-880b-c8a84389aeec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1124 02:32:27.493098    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:33:08.454967    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:34:30.377201    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-524458 -n functional-524458
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-11-24 02:38:26.637189059 +0000 UTC m=+851.361542893
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-524458 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-524458 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-524458/192.168.49.2
Start Time:       Mon, 24 Nov 2025 02:32:26 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:  10.244.0.9
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5dn22 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-5dn22:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-524458
Normal   Pulling    2m48s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     2m46s (x5 over 5m57s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   2m46s (x5 over 5m57s)  kubelet  Error: ErrImagePull
Warning  Failed   47s (x20 over 5m57s)   kubelet  Error: ImagePullBackOff
Normal   BackOff  32s (x21 over 5m57s)   kubelet  Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-524458 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-524458 logs sp-pod -n default: exit status 1 (68.807708ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-524458 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-524458
helpers_test.go:243: (dbg) docker inspect functional-524458:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34",
	        "Created": "2025-11-24T02:30:28.925146241Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40439,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T02:30:28.96111684Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/hosts",
	        "LogPath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34-json.log",
	        "Name": "/functional-524458",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-524458:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-524458",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34",
	                "LowerDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-524458",
	                "Source": "/var/lib/docker/volumes/functional-524458/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-524458",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-524458",
	                "name.minikube.sigs.k8s.io": "functional-524458",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b7e0113de159bb7f20d80f3f8f3ea57d04b5854af723c36f353c1401899bee04",
	            "SandboxKey": "/var/run/docker/netns/b7e0113de159",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-524458": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7ce998af76f9bf37a9b0b37e8dc03d8566ef5a726be1278dc8886354dffa2129",
	                    "EndpointID": "e57019316db23c37637b8f4e72b83f56be989c49058967b2c1d7a721d73ffb4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "72:bd:14:22:6d:10",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-524458",
	                        "8f46810d4481"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
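The inspect dump above can be narrowed with a Go-template query when only the published ports are of interest; a small sketch against the same container name (the "22/tcp" key is taken from the NetworkSettings.Ports map shown above):
$ docker inspect -f '{{json .NetworkSettings.Ports}}' functional-524458
$ docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-524458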
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-524458 -n functional-524458
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-524458 logs -n 25: (1.248435006s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                 ARGS                                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-524458 ssh findmnt -T /mount2                                                                                                                              │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh     │ functional-524458 ssh findmnt -T /mount3                                                                                                                              │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ mount   │ -p functional-524458 --kill=true                                                                                                                                      │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ license │                                                                                                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh     │ functional-524458 ssh sudo systemctl is-active docker                                                                                                                 │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │                     │
	│ ssh     │ functional-524458 ssh sudo systemctl is-active crio                                                                                                                   │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │                     │
	│ image   │ functional-524458 image load --daemon kicbase/echo-server:functional-524458 --alsologtostderr                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image   │ functional-524458 image ls                                                                                                                                            │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image   │ functional-524458 image load --daemon kicbase/echo-server:functional-524458 --alsologtostderr                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image   │ functional-524458 image ls                                                                                                                                            │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image   │ functional-524458 image load --daemon kicbase/echo-server:functional-524458 --alsologtostderr                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image   │ functional-524458 image ls                                                                                                                                            │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image   │ functional-524458 image save kicbase/echo-server:functional-524458 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image   │ functional-524458 image rm kicbase/echo-server:functional-524458 --alsologtostderr                                                                                    │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image   │ functional-524458 image ls                                                                                                                                            │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image   │ functional-524458 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image   │ functional-524458 image ls                                                                                                                                            │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image   │ functional-524458 image save --daemon kicbase/echo-server:functional-524458 --alsologtostderr                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh     │ functional-524458 ssh sudo cat /etc/test/nested/copy/8429/hosts                                                                                                       │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh     │ functional-524458 ssh sudo cat /etc/ssl/certs/8429.pem                                                                                                                │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh     │ functional-524458 ssh sudo cat /usr/share/ca-certificates/8429.pem                                                                                                    │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh     │ functional-524458 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                              │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh     │ functional-524458 ssh sudo cat /etc/ssl/certs/84292.pem                                                                                                               │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh     │ functional-524458 ssh sudo cat /usr/share/ca-certificates/84292.pem                                                                                                   │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh     │ functional-524458 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                              │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:32:04
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:32:04.712497   49906 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:32:04.712948   49906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:04.712960   49906 out.go:374] Setting ErrFile to fd 2...
	I1124 02:32:04.712966   49906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:04.713312   49906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 02:32:04.713858   49906 out.go:368] Setting JSON to false
	I1124 02:32:04.715081   49906 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":868,"bootTime":1763950657,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:32:04.715153   49906 start.go:143] virtualization: kvm guest
	I1124 02:32:04.716957   49906 out.go:179] * [functional-524458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:32:04.718285   49906 notify.go:221] Checking for updates...
	I1124 02:32:04.718332   49906 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:32:04.719589   49906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:32:04.720934   49906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 02:32:04.722032   49906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 02:32:04.723392   49906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:32:04.724722   49906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:32:04.726193   49906 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 02:32:04.726692   49906 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:32:04.751591   49906 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:32:04.751738   49906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:04.812419   49906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:04.802268406 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:04.812559   49906 docker.go:319] overlay module found
	I1124 02:32:04.814655   49906 out.go:179] * Using the docker driver based on existing profile
	I1124 02:32:04.815752   49906 start.go:309] selected driver: docker
	I1124 02:32:04.815794   49906 start.go:927] validating driver "docker" against &{Name:functional-524458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-524458 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:04.815939   49906 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:32:04.816055   49906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:04.889898   49906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:04.876051797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:04.890729   49906 cni.go:84] Creating CNI manager for ""
	I1124 02:32:04.890846   49906 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 02:32:04.890914   49906 start.go:353] cluster config:
	{Name:functional-524458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-524458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:04.893578   49906 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f4209b6719c49       56cc512116c8f       6 minutes ago       Exited              mount-munger              0                   56e203a84c75b       busybox-mount                               default
	2d6aad34e22f6       6e38f40d628db       6 minutes ago       Running             storage-provisioner       2                   04364561b4944       storage-provisioner                         kube-system
	ea021386d9aa8       c3994bc696102       6 minutes ago       Running             kube-apiserver            0                   f549fcd0e6bde       kube-apiserver-functional-524458            kube-system
	2402aa0a440d9       5f1f5298c888d       6 minutes ago       Running             etcd                      1                   755542b390469       etcd-functional-524458                      kube-system
	3c37f8c32c41e       c80c8dbafe7dd       6 minutes ago       Running             kube-controller-manager   2                   32624cec026c9       kube-controller-manager-functional-524458   kube-system
	727f77f614fb0       52546a367cc9e       7 minutes ago       Running             coredns                   1                   9655c54274a11       coredns-66bc5c9577-vm5lj                    kube-system
	35bec471e4f9f       fc25172553d79       7 minutes ago       Running             kube-proxy                1                   e45eacf9b156d       kube-proxy-fpnq6                            kube-system
	69ec8eb7d8059       409467f978b4a       7 minutes ago       Running             kindnet-cni               1                   9a6c08ca602bb       kindnet-z2hwm                               kube-system
	cabebfa1d5c87       c80c8dbafe7dd       7 minutes ago       Exited              kube-controller-manager   1                   32624cec026c9       kube-controller-manager-functional-524458   kube-system
	33d9520aecf65       7dd6aaa1717ab       7 minutes ago       Running             kube-scheduler            1                   0f09143310ce8       kube-scheduler-functional-524458            kube-system
	1933b021444ba       6e38f40d628db       7 minutes ago       Exited              storage-provisioner       1                   04364561b4944       storage-provisioner                         kube-system
	ff1f2401e0888       52546a367cc9e       7 minutes ago       Exited              coredns                   0                   9655c54274a11       coredns-66bc5c9577-vm5lj                    kube-system
	a4e33a61af8cc       409467f978b4a       7 minutes ago       Exited              kindnet-cni               0                   9a6c08ca602bb       kindnet-z2hwm                               kube-system
	bcaa0dfec6478       fc25172553d79       7 minutes ago       Exited              kube-proxy                0                   e45eacf9b156d       kube-proxy-fpnq6                            kube-system
	011ce34e2a265       7dd6aaa1717ab       7 minutes ago       Exited              kube-scheduler            0                   0f09143310ce8       kube-scheduler-functional-524458            kube-system
	9d4e9836cae55       5f1f5298c888d       7 minutes ago       Exited              etcd                      0                   755542b390469       etcd-functional-524458                      kube-system
	
	
	==> containerd <==
	Nov 24 02:38:10 functional-524458 containerd[3791]: time="2025-11-24T02:38:10.450687464Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-524458\" returns successfully"
	Nov 24 02:38:10 functional-524458 containerd[3791]: time="2025-11-24T02:38:10.655728109Z" level=info msg="No images store for sha256:021f04dbc9ff5b912b9dce47562a405558aba47d0595b482fc7e3b67ec33eb02"
	Nov 24 02:38:10 functional-524458 containerd[3791]: time="2025-11-24T02:38:10.657081147Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-524458\""
	Nov 24 02:38:10 functional-524458 containerd[3791]: time="2025-11-24T02:38:10.660744816Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 02:38:10 functional-524458 containerd[3791]: time="2025-11-24T02:38:10.661086575Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-524458\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 02:38:11 functional-524458 containerd[3791]: time="2025-11-24T02:38:11.433767316Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-524458\""
	Nov 24 02:38:11 functional-524458 containerd[3791]: time="2025-11-24T02:38:11.435583523Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-524458\""
	Nov 24 02:38:11 functional-524458 containerd[3791]: time="2025-11-24T02:38:11.436609839Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Nov 24 02:38:11 functional-524458 containerd[3791]: time="2025-11-24T02:38:11.442600467Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-524458\" returns successfully"
	Nov 24 02:38:12 functional-524458 containerd[3791]: time="2025-11-24T02:38:12.074567179Z" level=info msg="No images store for sha256:56df1e9d9d130ccade703fd34d6ea95bc835d3bcf12afa33b0c7cf46fcb0071f"
	Nov 24 02:38:12 functional-524458 containerd[3791]: time="2025-11-24T02:38:12.075619465Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-524458\""
	Nov 24 02:38:12 functional-524458 containerd[3791]: time="2025-11-24T02:38:12.079217751Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 02:38:12 functional-524458 containerd[3791]: time="2025-11-24T02:38:12.079502411Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-524458\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 02:38:15 functional-524458 containerd[3791]: time="2025-11-24T02:38:15.173844195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:mysql-5bb876957f-zpp9z,Uid:40b778fd-24dc-49c9-9f5c-abfbc7dfe529,Namespace:default,Attempt:0,}"
	Nov 24 02:38:15 functional-524458 containerd[3791]: time="2025-11-24T02:38:15.212412408Z" level=info msg="connecting to shim 10d00d9c3b16649b9914c58e1930bf5b81a650c7ecfb9a15666c80e96d779a38" address="unix:///run/containerd/s/93439c38bf8362a2d5947a889630b19529973a3c6ba72154906228bc86ad77d5" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 02:38:15 functional-524458 containerd[3791]: time="2025-11-24T02:38:15.276416265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:mysql-5bb876957f-zpp9z,Uid:40b778fd-24dc-49c9-9f5c-abfbc7dfe529,Namespace:default,Attempt:0,} returns sandbox id \"10d00d9c3b16649b9914c58e1930bf5b81a650c7ecfb9a15666c80e96d779a38\""
	Nov 24 02:38:15 functional-524458 containerd[3791]: time="2025-11-24T02:38:15.278330595Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Nov 24 02:38:17 functional-524458 containerd[3791]: time="2025-11-24T02:38:17.517235434Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:38:17 functional-524458 containerd[3791]: time="2025-11-24T02:38:17.517263023Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10966"
	Nov 24 02:38:17 functional-524458 containerd[3791]: time="2025-11-24T02:38:17.518127845Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Nov 24 02:38:19 functional-524458 containerd[3791]: time="2025-11-24T02:38:19.746148258Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:38:19 functional-524458 containerd[3791]: time="2025-11-24T02:38:19.746210697Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Nov 24 02:38:21 functional-524458 containerd[3791]: time="2025-11-24T02:38:21.651823479Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Nov 24 02:38:23 functional-524458 containerd[3791]: time="2025-11-24T02:38:23.886535851Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:38:23 functional-524458 containerd[3791]: time="2025-11-24T02:38:23.886602961Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	
	
	==> coredns [727f77f614fb0a0b55f6253486a5a0fde92abd053c3b6f96e7486e7c98748d27] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38803 - 57960 "HINFO IN 5576012265632122714.157634741446322758. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026213698s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ff1f2401e08887269b9ebfd8fd03528e5039f46f83d4774cb9fb801caa36a503] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47722 - 43225 "HINFO IN 3493354150572541863.540786576554480922. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.091381734s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-524458
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-524458
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=functional-524458
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T02_30_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 02:30:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-524458
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 02:38:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 02:32:41 +0000   Mon, 24 Nov 2025 02:30:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 02:32:41 +0000   Mon, 24 Nov 2025 02:30:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 02:32:41 +0000   Mon, 24 Nov 2025 02:30:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 02:32:41 +0000   Mon, 24 Nov 2025 02:31:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-524458
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                abdae176-c8fb-4f16-9193-b297c7e2de4f
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8t9t8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  default                     hello-node-connect-7d85dfc575-bm6nh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  default                     mysql-5bb876957f-zpp9z                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     13s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-vm5lj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m38s
	  kube-system                 etcd-functional-524458                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m43s
	  kube-system                 kindnet-z2hwm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m38s
	  kube-system                 kube-apiserver-functional-524458              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 kube-controller-manager-functional-524458     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 kube-proxy-fpnq6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-scheduler-functional-524458              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-dbcxf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-88tpq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m37s                  kube-proxy       
	  Normal  Starting                 6m41s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m43s                  kubelet          Node functional-524458 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m43s                  kubelet          Node functional-524458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m43s                  kubelet          Node functional-524458 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m43s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m39s                  node-controller  Node functional-524458 event: Registered Node functional-524458 in Controller
	  Normal  NodeReady                7m26s                  kubelet          Node functional-524458 status is now: NodeReady
	  Normal  Starting                 6m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m50s (x8 over 6m50s)  kubelet          Node functional-524458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m50s (x8 over 6m50s)  kubelet          Node functional-524458 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m50s (x7 over 6m50s)  kubelet          Node functional-524458 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m45s                  node-controller  Node functional-524458 event: Registered Node functional-524458 in Controller
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [2402aa0a440d9433b29a82464bb9b8fc9be1875f342295ed598b56c3c455966c] <==
	{"level":"warn","ts":"2025-11-24T02:31:38.981980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:38.988596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.003701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.011607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.017725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.024097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.030213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.036498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.042249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.049380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.062954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.069320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.075380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.081771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.087969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.094342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.101674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.107927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.120463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.126657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.132754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.146244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.152525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.159115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.211463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34458","server-name":"","error":"EOF"}
	
	
	==> etcd [9d4e9836cae55bbedae9f6e86b045334f2599454b4d798c441d5ec93f6c930af] <==
	{"level":"warn","ts":"2025-11-24T02:30:41.152953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.163687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.169222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.184769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.191917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.198031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.239271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37594","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T02:31:35.938040Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T02:31:35.938123Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-524458","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T02:31:35.938224Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T02:31:35.939802Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T02:31:35.939876Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:35.939895Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-24T02:31:35.939971Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T02:31:35.939975Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T02:31:35.940018Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T02:31:35.940028Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T02:31:35.940028Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T02:31:35.940046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:35.940033Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T02:31:35.940027Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T02:31:35.942031Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T02:31:35.942094Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:35.942124Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T02:31:35.942136Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-524458","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:38:27 up 20 min,  0 user,  load average: 0.21, 0.34, 0.38
	Linux functional-524458 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [69ec8eb7d80599bf65d41066aa2aee4949f2acfa4261127f4c76f4644245664c] <==
	I1124 02:36:26.765837       1 main.go:301] handling current node
	I1124 02:36:36.764560       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:36:36.764598       1 main.go:301] handling current node
	I1124 02:36:46.764541       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:36:46.764573       1 main.go:301] handling current node
	I1124 02:36:56.766106       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:36:56.766143       1 main.go:301] handling current node
	I1124 02:37:06.764471       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:37:06.764508       1 main.go:301] handling current node
	I1124 02:37:16.767482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:37:16.767516       1 main.go:301] handling current node
	I1124 02:37:26.768373       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:37:26.768411       1 main.go:301] handling current node
	I1124 02:37:36.765055       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:37:36.765098       1 main.go:301] handling current node
	I1124 02:37:46.767468       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:37:46.767512       1 main.go:301] handling current node
	I1124 02:37:56.765655       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:37:56.765695       1 main.go:301] handling current node
	I1124 02:38:06.764580       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:38:06.764613       1 main.go:301] handling current node
	I1124 02:38:16.765520       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:38:16.765559       1 main.go:301] handling current node
	I1124 02:38:26.766217       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:38:26.766260       1 main.go:301] handling current node
	
	
	==> kindnet [a4e33a61af8ccbb5eaccab81c84cbce715cf4d1a4b518dc59c2c36603f42d57b] <==
	I1124 02:30:50.961261       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 02:30:50.961496       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 02:30:50.961632       1 main.go:148] setting mtu 1500 for CNI 
	I1124 02:30:50.961649       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 02:30:50.961677       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T02:30:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 02:30:51.256361       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 02:30:51.256419       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 02:30:51.256434       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 02:30:51.256582       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 02:30:51.556556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 02:30:51.556585       1 metrics.go:72] Registering metrics
	I1124 02:30:51.556626       1 controller.go:711] "Syncing nftables rules"
	I1124 02:31:01.166865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:01.166931       1 main.go:301] handling current node
	I1124 02:31:11.173260       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:11.173293       1 main.go:301] handling current node
	I1124 02:31:21.165849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:21.165882       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ea021386d9aa8a8c91cb8a7d06750b8bc3c8ae4984b484b36e172d2d51607ca6] <==
	I1124 02:31:39.665033       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 02:31:39.665039       1 cache.go:39] Caches are synced for autoregister controller
	I1124 02:31:39.665112       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 02:31:39.665156       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 02:31:39.671136       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 02:31:39.687077       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 02:31:39.789384       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 02:31:39.789384       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 02:31:40.567874       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1124 02:31:40.774123       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1124 02:31:40.775592       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 02:31:40.781324       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 02:31:41.502163       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 02:31:41.596463       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 02:31:41.651094       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 02:31:41.658579       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 02:31:42.998188       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 02:32:00.019748       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.68.228"}
	I1124 02:32:03.955744       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.172.10"}
	I1124 02:32:05.906877       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 02:32:06.068465       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.50.142"}
	I1124 02:32:06.082826       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.246.74"}
	I1124 02:32:08.388293       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.90.125"}
	I1124 02:37:07.178058       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.160.243"}
	I1124 02:38:14.810943       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.11.161"}
	
	
	==> kube-controller-manager [3c37f8c32c41ed1a47767957e0e11d8a45a9a5e520e681dfe0b2c1244b78c872] <==
	I1124 02:31:42.971334       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-524458"
	I1124 02:31:42.971430       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 02:31:42.992772       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 02:31:42.992836       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 02:31:42.992849       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 02:31:42.992943       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 02:31:42.992975       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 02:31:42.993031       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 02:31:42.993071       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 02:31:42.993080       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 02:31:42.993088       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 02:31:42.993194       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 02:31:42.993367       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 02:31:42.993528       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 02:31:42.999458       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 02:31:43.007691       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 02:31:43.010002       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 02:31:43.014358       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 02:32:05.973912       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.983036       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.984232       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.989896       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.991546       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.996597       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.998241       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [cabebfa1d5c877bf3cd69d20bc91b4623549e427424bf2698cfa5885285e48a4] <==
	I1124 02:31:27.176289       1 serving.go:386] Generated self-signed cert in-memory
	I1124 02:31:27.518820       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1124 02:31:27.518843       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:31:27.520250       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1124 02:31:27.520295       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1124 02:31:27.520657       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1124 02:31:27.520685       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 02:31:37.523230       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [35bec471e4f9f67e63e215b9a948fc13ea1c482c87ba3da7c89537d2b954fc21] <==
	I1124 02:31:26.546883       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1124 02:31:26.547817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:27.584158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:30.628675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:36.426381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1124 02:31:46.348002       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:31:46.348054       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:31:46.348172       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:31:46.370611       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:31:46.370675       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:31:46.376507       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:31:46.376952       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:31:46.376970       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:31:46.378288       1 config.go:200] "Starting service config controller"
	I1124 02:31:46.378312       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:31:46.378299       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:31:46.378358       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:31:46.378397       1 config.go:309] "Starting node config controller"
	I1124 02:31:46.378411       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:31:46.378436       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:31:46.378441       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:31:46.478518       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:31:46.478573       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 02:31:46.478574       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 02:31:46.478599       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [bcaa0dfec6478b7b3d78ebb52e18e001772dbf1fb031bbb5f5aeee3f6f2e047b] <==
	I1124 02:30:50.589509       1 server_linux.go:53] "Using iptables proxy"
	I1124 02:30:50.649864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 02:30:50.750262       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:30:50.750315       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:30:50.750429       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:30:50.775421       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:30:50.775498       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:30:50.781395       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:30:50.781709       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:30:50.781726       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:30:50.783193       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:30:50.783226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:30:50.783251       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:30:50.783250       1 config.go:200] "Starting service config controller"
	I1124 02:30:50.783257       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:30:50.783263       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:30:50.783275       1 config.go:309] "Starting node config controller"
	I1124 02:30:50.783283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:30:50.783291       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 02:30:50.884070       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:30:50.884192       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 02:30:50.884259       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [011ce34e2a26582eb45146eecd7a13e0eecbbfb40048e91c82e8204398646303] <==
	E1124 02:30:41.648514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:30:41.648522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:30:41.648547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:30:41.648550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:30:41.648654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:30:41.648675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:30:41.648745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:30:41.648760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:30:42.465005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:30:42.515396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:30:42.523703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:30:42.529861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:30:42.625483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:30:42.628516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:30:42.666309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:30:42.679452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:30:42.825835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:30:42.900062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1124 02:30:45.044111       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 02:31:25.794834       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 02:31:25.794902       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 02:31:25.795210       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 02:31:25.795233       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 02:31:25.795368       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 02:31:25.795398       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [33d9520aecf65cccdf85716e396cb41948cafd4c476f6950b9b870412cbad9b5] <==
	E1124 02:31:32.107552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:31:32.246428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:31:32.256066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:31:32.279681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:31:32.484611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:34.338665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:31:34.646649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:31:35.195266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:31:35.223753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:31:35.322629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:31:35.443983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:31:35.698518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 02:31:36.005382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 02:31:36.379704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:31:36.402500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:36.675413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:31:36.904160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:31:36.909692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:31:37.085287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:31:37.194098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:31:37.273123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 02:31:37.322954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:31:38.185146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:31:39.583259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1124 02:31:46.047493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 02:38:19 functional-524458 kubelet[4784]:         failed to pull and unpack image "docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests
	Nov 24 02:38:19 functional-524458 kubelet[4784]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Nov 24 02:38:19 functional-524458 kubelet[4784]:  > image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 24 02:38:19 functional-524458 kubelet[4784]: E1124 02:38:19.746762    4784 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Nov 24 02:38:19 functional-524458 kubelet[4784]:         container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf_kubernetes-dashboard(d5f11dfa-2d20-453d-87e6-0855f65e82b0): ErrImagePull: failed to pull and unpack image "docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests
	Nov 24 02:38:19 functional-524458 kubelet[4784]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Nov 24 02:38:19 functional-524458 kubelet[4784]:  > logger="UnhandledError"
	Nov 24 02:38:19 functional-524458 kubelet[4784]: E1124 02:38:19.746839    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-dbcxf" podUID="d5f11dfa-2d20-453d-87e6-0855f65e82b0"
	Nov 24 02:38:21 functional-524458 kubelet[4784]: E1124 02:38:21.651655    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-bm6nh" podUID="dd29dc9d-2fde-4547-87b6-959980d93ebc"
	Nov 24 02:38:21 functional-524458 kubelet[4784]: E1124 02:38:21.652151    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ccb78ded-5900-46ff-be89-2019899a83b5"
	Nov 24 02:38:23 functional-524458 kubelet[4784]: E1124 02:38:23.652727    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-88tpq" podUID="25631234-5164-4822-8c75-7190bda5530f"
	Nov 24 02:38:23 functional-524458 kubelet[4784]: E1124 02:38:23.886911    4784 log.go:32] "PullImage from image service failed" err=<
	Nov 24 02:38:23 functional-524458 kubelet[4784]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	Nov 24 02:38:23 functional-524458 kubelet[4784]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Nov 24 02:38:23 functional-524458 kubelet[4784]:  > image="docker.io/nginx:latest"
	Nov 24 02:38:23 functional-524458 kubelet[4784]: E1124 02:38:23.886963    4784 kuberuntime_image.go:43] "Failed to pull image" err=<
	Nov 24 02:38:23 functional-524458 kubelet[4784]:         failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	Nov 24 02:38:23 functional-524458 kubelet[4784]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Nov 24 02:38:23 functional-524458 kubelet[4784]:  > image="docker.io/nginx:latest"
	Nov 24 02:38:23 functional-524458 kubelet[4784]: E1124 02:38:23.887061    4784 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Nov 24 02:38:23 functional-524458 kubelet[4784]:         container myfrontend start failed in pod sp-pod_default(9bc43f9e-db29-442d-880b-c8a84389aeec): ErrImagePull: failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	Nov 24 02:38:23 functional-524458 kubelet[4784]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Nov 24 02:38:23 functional-524458 kubelet[4784]:  > logger="UnhandledError"
	Nov 24 02:38:23 functional-524458 kubelet[4784]: E1124 02:38:23.887091    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9bc43f9e-db29-442d-880b-c8a84389aeec"
	Nov 24 02:38:27 functional-524458 kubelet[4784]: E1124 02:38:27.657621    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-8t9t8" podUID="552b6aef-26fa-4446-b5b4-d44e2975e21d"
	
	
	==> storage-provisioner [1933b021444ba331525fa058f0fd57fefe0a1dc2f1ad2bfe07daf3d4de6d2b40] <==
	I1124 02:31:26.369727       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 02:31:26.371564       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [2d6aad34e22f65a6adde2c1908faa771c84b1c7108f288e562d127f56a671c37] <==
	W1124 02:38:02.793347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:04.796368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:04.800408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:06.803750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:06.807551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:08.810664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:08.814204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:10.817478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:10.822090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:12.825317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:12.829133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:14.831593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:14.835271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:16.837920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:16.841902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:18.844858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:18.848732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:20.851955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:20.856935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:22.859943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:22.863809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:24.866693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:24.871761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:26.875114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:38:26.879333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-524458 -n functional-524458
helpers_test.go:269: (dbg) Run:  kubectl --context functional-524458 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-8t9t8 hello-node-connect-7d85dfc575-bm6nh mysql-5bb876957f-zpp9z nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-524458 describe pod busybox-mount hello-node-75c85bcc94-8t9t8 hello-node-connect-7d85dfc575-bm6nh mysql-5bb876957f-zpp9z nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-524458 describe pod busybox-mount hello-node-75c85bcc94-8t9t8 hello-node-connect-7d85dfc575-bm6nh mysql-5bb876957f-zpp9z nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq: exit status 1 (97.243179ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:06 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  containerd://f4209b6719c49eceae60f8419a324de5ece633779675ba9d470e3b8d2a06797c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 02:32:13 +0000
	      Finished:     Mon, 24 Nov 2025 02:32:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vglvc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vglvc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m22s  default-scheduler  Successfully assigned default/busybox-mount to functional-524458
	  Normal  Pulling    6m21s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m15s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.088s (6.425s including waiting). Image size: 2395207 bytes.
	  Normal  Created    6m15s  kubelet            Created container: mount-munger
	  Normal  Started    6m15s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-8t9t8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:03 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l4v7c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l4v7c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age    From               Message
	  ----     ------     ----   ----               -------
	  Normal   Scheduled  6m25s  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8t9t8 to functional-524458
	  Warning  Failed     6m22s  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  3m16s (x5 over 6m24s)  kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   3m14s (x5 over 6m22s)  kubelet  Error: ErrImagePull
	  Warning  Failed   3m14s (x4 over 6m7s)   kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   68s (x20 over 6m21s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  54s (x21 over 6m21s)  kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-bm6nh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:37:07 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85j9g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-85j9g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  81s   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-bm6nh to functional-524458
	  Warning  Failed     78s   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  37s (x3 over 81s)  kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   35s (x3 over 78s)  kubelet  Error: ErrImagePull
	  Warning  Failed   35s (x2 over 64s)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  7s (x4 over 78s)  kubelet  Back-off pulling image "kicbase/echo-server"
	  Warning  Failed   7s (x4 over 78s)  kubelet  Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-zpp9z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:38:14 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-knwdf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-knwdf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  14s   default-scheduler  Successfully assigned default/mysql-5bb876957f-zpp9z to functional-524458
	  Normal   Pulling    13s   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     11s   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   11s  kubelet  Error: ErrImagePull
	  Normal   BackOff  11s  kubelet  Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed   11s  kubelet  Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:08 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5g4km (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5g4km:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age    From               Message
	  ----     ------     ----   ----               -------
	  Normal   Scheduled  6m20s  default-scheduler  Successfully assigned default/nginx-svc to functional-524458
	  Warning  Failed     6m12s  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  3m19s (x5 over 6m20s)  kubelet  Pulling image "docker.io/nginx:alpine"
	  Warning  Failed   3m17s (x5 over 6m12s)  kubelet  Error: ErrImagePull
	  Warning  Failed   3m17s (x4 over 5m56s)  kubelet  Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   67s (x19 over 6m12s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  40s (x21 over 6m12s)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:26 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5dn22 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-5dn22:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m2s                   default-scheduler  Successfully assigned default/sp-pod to functional-524458
	  Normal   Pulling    2m50s (x5 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m48s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   2m48s (x5 over 5m59s)  kubelet  Error: ErrImagePull
	  Warning  Failed   49s (x20 over 5m59s)   kubelet  Error: ImagePullBackOff
	  Normal   BackOff  34s (x21 over 5m59s)   kubelet  Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-dbcxf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-88tpq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-524458 describe pod busybox-mount hello-node-75c85bcc94-8t9t8 hello-node-connect-7d85dfc575-bm6nh mysql-5bb876957f-zpp9z nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.93s)
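Every ErrImagePull/ImagePullBackOff event in the describe output above traces back to the same condition: unauthenticated pulls against registry-1.docker.io being answered with 429 Too Many Requests. A minimal sketch for confirming the remaining anonymous pull quota from the CI host (illustrative only, not part of the recorded run; it assumes curl and jq are installed and uses Docker's documented ratelimitpreview/test endpoint):

	TOKEN=$(curl -fsSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	# HEAD the test manifest; the response carries the ratelimit-limit and
	# ratelimit-remaining headers for this IP (a HEAD request should not itself
	# consume a pull, per Docker's rate-limit documentation).
	curl -fsSI -H "Authorization: Bearer $TOKEN" \
	  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i '^ratelimit'

Authenticating the runner's pulls (docker login) or pre-loading images into the cluster, as the Audit log further below shows was already done for echo-server-save.tar via "image load", avoids the anonymous limit.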

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-524458 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-zpp9z" [40b778fd-24dc-49c9-9f5c-abfbc7dfe529] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-524458 -n functional-524458
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-11-24 02:48:15.174141993 +0000 UTC m=+1439.898495810
functional_test.go:1804: (dbg) Run:  kubectl --context functional-524458 describe po mysql-5bb876957f-zpp9z -n default
functional_test.go:1804: (dbg) kubectl --context functional-524458 describe po mysql-5bb876957f-zpp9z -n default:
Name:             mysql-5bb876957f-zpp9z
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-524458/192.168.49.2
Start Time:       Mon, 24 Nov 2025 02:38:14 +0000
Labels:           app=mysql
pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-knwdf (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-knwdf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-zpp9z to functional-524458
Warning  Failed     8m17s (x3 over 9m58s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling  6m46s (x5 over 10m)    kubelet  Pulling image "docker.io/mysql:5.7"
Warning  Failed   6m43s (x5 over 9m58s)  kubelet  Error: ErrImagePull
Warning  Failed   6m43s (x2 over 9m41s)  kubelet  Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   4m48s (x20 over 9m58s)  kubelet  Error: ImagePullBackOff
Normal   BackOff  4m37s (x21 over 9m58s)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-524458 logs mysql-5bb876957f-zpp9z -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-524458 logs mysql-5bb876957f-zpp9z -n default: exit status 1 (59.618965ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-zpp9z" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-524458 logs mysql-5bb876957f-zpp9z -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-524458
helpers_test.go:243: (dbg) docker inspect functional-524458:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34",
	        "Created": "2025-11-24T02:30:28.925146241Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40439,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T02:30:28.96111684Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/hosts",
	        "LogPath": "/var/lib/docker/containers/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34/8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34-json.log",
	        "Name": "/functional-524458",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-524458:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-524458",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8f46810d4481b383c4e8cec7bd9923cb30aff4f78d21f34aa8b6c51265c76f34",
	                "LowerDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a7605a946befaee6b3381f12011e05152d00cc270917c19acb451c13949e7f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-524458",
	                "Source": "/var/lib/docker/volumes/functional-524458/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-524458",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-524458",
	                "name.minikube.sigs.k8s.io": "functional-524458",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b7e0113de159bb7f20d80f3f8f3ea57d04b5854af723c36f353c1401899bee04",
	            "SandboxKey": "/var/run/docker/netns/b7e0113de159",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-524458": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7ce998af76f9bf37a9b0b37e8dc03d8566ef5a726be1278dc8886354dffa2129",
	                    "EndpointID": "e57019316db23c37637b8f4e72b83f56be989c49058967b2c1d7a721d73ffb4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "72:bd:14:22:6d:10",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-524458",
	                        "8f46810d4481"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-524458 -n functional-524458
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-524458 logs -n 25: (1.248255738s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                              ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-524458 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image ls                                                                                                      │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image save --daemon kicbase/echo-server:functional-524458 --alsologtostderr                                   │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /etc/test/nested/copy/8429/hosts                                                                 │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /etc/ssl/certs/8429.pem                                                                          │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /usr/share/ca-certificates/8429.pem                                                              │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /etc/ssl/certs/51391683.0                                                                        │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /etc/ssl/certs/84292.pem                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /usr/share/ca-certificates/84292.pem                                                             │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                        │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image ls --format short --alsologtostderr                                                                     │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image ls --format yaml --alsologtostderr                                                                      │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ ssh            │ functional-524458 ssh pgrep buildkitd                                                                                           │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │                     │
	│ image          │ functional-524458 image build -t localhost/my-image:functional-524458 testdata/build --alsologtostderr                          │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image ls                                                                                                      │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image ls --format json --alsologtostderr                                                                      │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ image          │ functional-524458 image ls --format table --alsologtostderr                                                                     │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ update-context │ functional-524458 update-context --alsologtostderr -v=2                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ update-context │ functional-524458 update-context --alsologtostderr -v=2                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ update-context │ functional-524458 update-context --alsologtostderr -v=2                                                                         │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:38 UTC │ 24 Nov 25 02:38 UTC │
	│ service        │ functional-524458 service list                                                                                                  │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:42 UTC │ 24 Nov 25 02:42 UTC │
	│ service        │ functional-524458 service list -o json                                                                                          │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:42 UTC │ 24 Nov 25 02:42 UTC │
	│ service        │ functional-524458 service --namespace=default --https --url hello-node                                                          │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:42 UTC │                     │
	│ service        │ functional-524458 service hello-node --url --format={{.IP}}                                                                     │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:42 UTC │                     │
	│ service        │ functional-524458 service hello-node --url                                                                                      │ functional-524458 │ jenkins │ v1.37.0 │ 24 Nov 25 02:42 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:32:04
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:32:04.712497   49906 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:32:04.712948   49906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:04.712960   49906 out.go:374] Setting ErrFile to fd 2...
	I1124 02:32:04.712966   49906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:04.713312   49906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 02:32:04.713858   49906 out.go:368] Setting JSON to false
	I1124 02:32:04.715081   49906 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":868,"bootTime":1763950657,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:32:04.715153   49906 start.go:143] virtualization: kvm guest
	I1124 02:32:04.716957   49906 out.go:179] * [functional-524458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:32:04.718285   49906 notify.go:221] Checking for updates...
	I1124 02:32:04.718332   49906 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:32:04.719589   49906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:32:04.720934   49906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 02:32:04.722032   49906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 02:32:04.723392   49906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:32:04.724722   49906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:32:04.726193   49906 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 02:32:04.726692   49906 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:32:04.751591   49906 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:32:04.751738   49906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:04.812419   49906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:04.802268406 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:04.812559   49906 docker.go:319] overlay module found
	I1124 02:32:04.814655   49906 out.go:179] * Using the docker driver based on existing profile
	I1124 02:32:04.815752   49906 start.go:309] selected driver: docker
	I1124 02:32:04.815794   49906 start.go:927] validating driver "docker" against &{Name:functional-524458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-524458 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:04.815939   49906 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:32:04.816055   49906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:04.889898   49906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:04.876051797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:04.890729   49906 cni.go:84] Creating CNI manager for ""
	I1124 02:32:04.890846   49906 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 02:32:04.890914   49906 start.go:353] cluster config:
	{Name:functional-524458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-524458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:04.893578   49906 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	df34d7f841e73       9056ab77afb8e       11 seconds ago      Running             echo-server               0                   aa1534120d2fa       hello-node-connect-7d85dfc575-bm6nh         default
	f4209b6719c49       56cc512116c8f       16 minutes ago      Exited              mount-munger              0                   56e203a84c75b       busybox-mount                               default
	2d6aad34e22f6       6e38f40d628db       16 minutes ago      Running             storage-provisioner       2                   04364561b4944       storage-provisioner                         kube-system
	ea021386d9aa8       c3994bc696102       16 minutes ago      Running             kube-apiserver            0                   f549fcd0e6bde       kube-apiserver-functional-524458            kube-system
	2402aa0a440d9       5f1f5298c888d       16 minutes ago      Running             etcd                      1                   755542b390469       etcd-functional-524458                      kube-system
	3c37f8c32c41e       c80c8dbafe7dd       16 minutes ago      Running             kube-controller-manager   2                   32624cec026c9       kube-controller-manager-functional-524458   kube-system
	727f77f614fb0       52546a367cc9e       16 minutes ago      Running             coredns                   1                   9655c54274a11       coredns-66bc5c9577-vm5lj                    kube-system
	35bec471e4f9f       fc25172553d79       16 minutes ago      Running             kube-proxy                1                   e45eacf9b156d       kube-proxy-fpnq6                            kube-system
	69ec8eb7d8059       409467f978b4a       16 minutes ago      Running             kindnet-cni               1                   9a6c08ca602bb       kindnet-z2hwm                               kube-system
	cabebfa1d5c87       c80c8dbafe7dd       16 minutes ago      Exited              kube-controller-manager   1                   32624cec026c9       kube-controller-manager-functional-524458   kube-system
	33d9520aecf65       7dd6aaa1717ab       16 minutes ago      Running             kube-scheduler            1                   0f09143310ce8       kube-scheduler-functional-524458            kube-system
	1933b021444ba       6e38f40d628db       16 minutes ago      Exited              storage-provisioner       1                   04364561b4944       storage-provisioner                         kube-system
	ff1f2401e0888       52546a367cc9e       17 minutes ago      Exited              coredns                   0                   9655c54274a11       coredns-66bc5c9577-vm5lj                    kube-system
	a4e33a61af8cc       409467f978b4a       17 minutes ago      Exited              kindnet-cni               0                   9a6c08ca602bb       kindnet-z2hwm                               kube-system
	bcaa0dfec6478       fc25172553d79       17 minutes ago      Exited              kube-proxy                0                   e45eacf9b156d       kube-proxy-fpnq6                            kube-system
	011ce34e2a265       7dd6aaa1717ab       17 minutes ago      Exited              kube-scheduler            0                   0f09143310ce8       kube-scheduler-functional-524458            kube-system
	9d4e9836cae55       5f1f5298c888d       17 minutes ago      Exited              etcd                      0                   755542b390469       etcd-functional-524458                      kube-system
	
	
	==> containerd <==
	Nov 24 02:43:20 functional-524458 containerd[3791]: time="2025-11-24T02:43:20.653330816Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Nov 24 02:43:22 functional-524458 containerd[3791]: time="2025-11-24T02:43:22.882978919Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:43:22 functional-524458 containerd[3791]: time="2025-11-24T02:43:22.883012461Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11046"
	Nov 24 02:43:33 functional-524458 containerd[3791]: time="2025-11-24T02:43:33.651953889Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Nov 24 02:43:35 functional-524458 containerd[3791]: time="2025-11-24T02:43:35.883517886Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:43:35 functional-524458 containerd[3791]: time="2025-11-24T02:43:35.883563454Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Nov 24 02:44:17 functional-524458 containerd[3791]: time="2025-11-24T02:44:17.655986885Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Nov 24 02:44:19 functional-524458 containerd[3791]: time="2025-11-24T02:44:19.892875179Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:44:19 functional-524458 containerd[3791]: time="2025-11-24T02:44:19.892900220Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Nov 24 02:48:03 functional-524458 containerd[3791]: time="2025-11-24T02:48:03.655517561Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Nov 24 02:48:05 functional-524458 containerd[3791]: time="2025-11-24T02:48:05.181910338Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 02:48:05 functional-524458 containerd[3791]: time="2025-11-24T02:48:05.182580497Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=12114"
	Nov 24 02:48:05 functional-524458 containerd[3791]: time="2025-11-24T02:48:05.183918197Z" level=info msg="ImageUpdate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 02:48:05 functional-524458 containerd[3791]: time="2025-11-24T02:48:05.185746588Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 02:48:05 functional-524458 containerd[3791]: time="2025-11-24T02:48:05.186149425Z" level=info msg="Pulled image \"kicbase/echo-server:latest\" with image id \"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\", repo tag \"docker.io/kicbase/echo-server:latest\", repo digest \"docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6\", size \"2138418\" in 1.530571228s"
	Nov 24 02:48:05 functional-524458 containerd[3791]: time="2025-11-24T02:48:05.186181735Z" level=info msg="PullImage \"kicbase/echo-server:latest\" returns image reference \"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Nov 24 02:48:05 functional-524458 containerd[3791]: time="2025-11-24T02:48:05.190565732Z" level=info msg="CreateContainer within sandbox \"aa1534120d2fa6f9e3ace2db37100b43c192c24113447d39b0aba66e324a15ed\" for container &ContainerMetadata{Name:echo-server,Attempt:0,}"
	Nov 24 02:48:05 functional-524458 containerd[3791]: time="2025-11-24T02:48:05.195599772Z" level=info msg="Container df34d7f841e73b4e5475e25790004dd954690b8d9660523790ed7d0f8d156970: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 02:48:05 functional-524458 containerd[3791]: time="2025-11-24T02:48:05.200665401Z" level=info msg="CreateContainer within sandbox \"aa1534120d2fa6f9e3ace2db37100b43c192c24113447d39b0aba66e324a15ed\" for &ContainerMetadata{Name:echo-server,Attempt:0,} returns container id \"df34d7f841e73b4e5475e25790004dd954690b8d9660523790ed7d0f8d156970\""
	Nov 24 02:48:05 functional-524458 containerd[3791]: time="2025-11-24T02:48:05.201238252Z" level=info msg="StartContainer for \"df34d7f841e73b4e5475e25790004dd954690b8d9660523790ed7d0f8d156970\""
	Nov 24 02:48:05 functional-524458 containerd[3791]: time="2025-11-24T02:48:05.201958224Z" level=info msg="connecting to shim df34d7f841e73b4e5475e25790004dd954690b8d9660523790ed7d0f8d156970" address="unix:///run/containerd/s/80c74cba7fafa737753e3904bb392408893d96a386b33401fe7a660e430ce2ce" protocol=ttrpc version=3
	Nov 24 02:48:05 functional-524458 containerd[3791]: time="2025-11-24T02:48:05.249772130Z" level=info msg="StartContainer for \"df34d7f841e73b4e5475e25790004dd954690b8d9660523790ed7d0f8d156970\" returns successfully"
	Nov 24 02:48:07 functional-524458 containerd[3791]: time="2025-11-24T02:48:07.652976929Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Nov 24 02:48:09 functional-524458 containerd[3791]: time="2025-11-24T02:48:09.887948925Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 24 02:48:09 functional-524458 containerd[3791]: time="2025-11-24T02:48:09.887983460Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
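
The repeated "429 Too Many Requests" responses above are Docker Hub's unauthenticated pull rate limit hitting this runner, which is why the dashboard-metrics-scraper, nginx, mysql and dashboard pulls keep failing while the kicbase/echo-server pull eventually succeeds. A minimal workaround sketch, run inside the minikube node (for example via "minikube ssh"), would point containerd's docker.io registry host at a pull-through mirror; the mirror endpoint and the assumption that the CRI registry config_path is /etc/containerd/certs.d are illustrative only and not part of this job's configuration:

# hypothetical sketch, not part of this test run
sudo mkdir -p /etc/containerd/certs.d/docker.io
sudo tee /etc/containerd/certs.d/docker.io/hosts.toml >/dev/null <<'EOF'
# try the mirror first for pulls and resolves, fall back to Docker Hub
server = "https://registry-1.docker.io"

[host."https://mirror.gcr.io"]
  capabilities = ["pull", "resolve"]
EOF
sudo systemctl restart containerd

Authenticated pulls (docker login, or an imagePullSecrets entry on the affected pods) would also avoid the anonymous rate limit.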
	
	
	==> coredns [727f77f614fb0a0b55f6253486a5a0fde92abd053c3b6f96e7486e7c98748d27] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38803 - 57960 "HINFO IN 5576012265632122714.157634741446322758. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026213698s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ff1f2401e08887269b9ebfd8fd03528e5039f46f83d4774cb9fb801caa36a503] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47722 - 43225 "HINFO IN 3493354150572541863.540786576554480922. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.091381734s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-524458
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-524458
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=functional-524458
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T02_30_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 02:30:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-524458
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 02:48:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 02:48:08 +0000   Mon, 24 Nov 2025 02:30:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 02:48:08 +0000   Mon, 24 Nov 2025 02:30:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 02:48:08 +0000   Mon, 24 Nov 2025 02:30:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 02:48:08 +0000   Mon, 24 Nov 2025 02:31:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-524458
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                abdae176-c8fb-4f16-9193-b297c7e2de4f
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8t9t8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     hello-node-connect-7d85dfc575-bm6nh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     mysql-5bb876957f-zpp9z                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-vm5lj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	  kube-system                 etcd-functional-524458                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-z2hwm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-functional-524458              250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-functional-524458     200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-fpnq6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-524458              100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-dbcxf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-88tpq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientPID     17m                kubelet          Node functional-524458 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node functional-524458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node functional-524458 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           17m                node-controller  Node functional-524458 event: Registered Node functional-524458 in Controller
	  Normal  NodeReady                17m                kubelet          Node functional-524458 status is now: NodeReady
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-524458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-524458 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node functional-524458 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node functional-524458 event: Registered Node functional-524458 in Controller
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [2402aa0a440d9433b29a82464bb9b8fc9be1875f342295ed598b56c3c455966c] <==
	{"level":"warn","ts":"2025-11-24T02:31:39.030213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.036498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.042249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.049380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.062954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.069320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.075380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.081771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.087969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.094342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.101674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.107927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.120463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.126657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.132754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.146244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.152525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.159115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.211463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34458","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T02:41:38.734165Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1190}
	{"level":"info","ts":"2025-11-24T02:41:38.753600Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1190,"took":"19.093539ms","hash":3758438164,"current-db-size-bytes":3837952,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1937408,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-11-24T02:41:38.753645Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3758438164,"revision":1190,"compact-revision":-1}
	{"level":"info","ts":"2025-11-24T02:46:38.739300Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1610}
	{"level":"info","ts":"2025-11-24T02:46:38.742744Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1610,"took":"3.129698ms","hash":2444425337,"current-db-size-bytes":3837952,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":2326528,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-11-24T02:46:38.742792Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2444425337,"revision":1610,"compact-revision":1190}
	
	
	==> etcd [9d4e9836cae55bbedae9f6e86b045334f2599454b4d798c441d5ec93f6c930af] <==
	{"level":"warn","ts":"2025-11-24T02:30:41.152953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.163687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.169222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.184769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.191917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.198031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:41.239271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37594","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T02:31:35.938040Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T02:31:35.938123Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-524458","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T02:31:35.938224Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T02:31:35.939802Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T02:31:35.939876Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:35.939895Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-24T02:31:35.939971Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T02:31:35.939975Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T02:31:35.940018Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T02:31:35.940028Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T02:31:35.940028Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T02:31:35.940046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:35.940033Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T02:31:35.940027Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T02:31:35.942031Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T02:31:35.942094Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:35.942124Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T02:31:35.942136Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-524458","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:48:16 up 30 min,  0 user,  load average: 0.26, 0.17, 0.25
	Linux functional-524458 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [69ec8eb7d80599bf65d41066aa2aee4949f2acfa4261127f4c76f4644245664c] <==
	I1124 02:46:06.770298       1 main.go:301] handling current node
	I1124 02:46:16.773372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:46:16.773403       1 main.go:301] handling current node
	I1124 02:46:26.765988       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:46:26.766048       1 main.go:301] handling current node
	I1124 02:46:36.766891       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:46:36.766929       1 main.go:301] handling current node
	I1124 02:46:46.764733       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:46:46.764824       1 main.go:301] handling current node
	I1124 02:46:56.765012       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:46:56.765056       1 main.go:301] handling current node
	I1124 02:47:06.773115       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:47:06.773147       1 main.go:301] handling current node
	I1124 02:47:16.764883       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:47:16.764914       1 main.go:301] handling current node
	I1124 02:47:26.766442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:47:26.766484       1 main.go:301] handling current node
	I1124 02:47:36.774252       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:47:36.774294       1 main.go:301] handling current node
	I1124 02:47:46.770949       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:47:46.771008       1 main.go:301] handling current node
	I1124 02:47:56.766936       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:47:56.766972       1 main.go:301] handling current node
	I1124 02:48:06.765287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:48:06.765341       1 main.go:301] handling current node
	
	
	==> kindnet [a4e33a61af8ccbb5eaccab81c84cbce715cf4d1a4b518dc59c2c36603f42d57b] <==
	I1124 02:30:50.961261       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 02:30:50.961496       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 02:30:50.961632       1 main.go:148] setting mtu 1500 for CNI 
	I1124 02:30:50.961649       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 02:30:50.961677       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T02:30:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 02:30:51.256361       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 02:30:51.256419       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 02:30:51.256434       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 02:30:51.256582       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 02:30:51.556556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 02:30:51.556585       1 metrics.go:72] Registering metrics
	I1124 02:30:51.556626       1 controller.go:711] "Syncing nftables rules"
	I1124 02:31:01.166865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:01.166931       1 main.go:301] handling current node
	I1124 02:31:11.173260       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:11.173293       1 main.go:301] handling current node
	I1124 02:31:21.165849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:21.165882       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ea021386d9aa8a8c91cb8a7d06750b8bc3c8ae4984b484b36e172d2d51607ca6] <==
	I1124 02:31:39.665039       1 cache.go:39] Caches are synced for autoregister controller
	I1124 02:31:39.665112       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 02:31:39.665156       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 02:31:39.671136       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 02:31:39.687077       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 02:31:39.789384       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 02:31:39.789384       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 02:31:40.567874       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1124 02:31:40.774123       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1124 02:31:40.775592       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 02:31:40.781324       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 02:31:41.502163       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 02:31:41.596463       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 02:31:41.651094       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 02:31:41.658579       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 02:31:42.998188       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 02:32:00.019748       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.68.228"}
	I1124 02:32:03.955744       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.172.10"}
	I1124 02:32:05.906877       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 02:32:06.068465       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.50.142"}
	I1124 02:32:06.082826       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.246.74"}
	I1124 02:32:08.388293       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.90.125"}
	I1124 02:37:07.178058       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.160.243"}
	I1124 02:38:14.810943       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.11.161"}
	I1124 02:41:39.600990       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [3c37f8c32c41ed1a47767957e0e11d8a45a9a5e520e681dfe0b2c1244b78c872] <==
	I1124 02:31:42.971334       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-524458"
	I1124 02:31:42.971430       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 02:31:42.992772       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 02:31:42.992836       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 02:31:42.992849       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 02:31:42.992943       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 02:31:42.992975       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 02:31:42.993031       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 02:31:42.993071       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 02:31:42.993080       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 02:31:42.993088       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 02:31:42.993194       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 02:31:42.993367       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 02:31:42.993528       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 02:31:42.999458       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 02:31:43.007691       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 02:31:43.010002       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 02:31:43.014358       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 02:32:05.973912       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.983036       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.984232       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.989896       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.991546       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.996597       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:05.998241       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [cabebfa1d5c877bf3cd69d20bc91b4623549e427424bf2698cfa5885285e48a4] <==
	I1124 02:31:27.176289       1 serving.go:386] Generated self-signed cert in-memory
	I1124 02:31:27.518820       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1124 02:31:27.518843       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:31:27.520250       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1124 02:31:27.520295       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1124 02:31:27.520657       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1124 02:31:27.520685       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 02:31:37.523230       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [35bec471e4f9f67e63e215b9a948fc13ea1c482c87ba3da7c89537d2b954fc21] <==
	I1124 02:31:26.546883       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1124 02:31:26.547817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:27.584158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:30.628675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:36.426381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-524458&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1124 02:31:46.348002       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:31:46.348054       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:31:46.348172       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:31:46.370611       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:31:46.370675       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:31:46.376507       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:31:46.376952       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:31:46.376970       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:31:46.378288       1 config.go:200] "Starting service config controller"
	I1124 02:31:46.378312       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:31:46.378299       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:31:46.378358       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:31:46.378397       1 config.go:309] "Starting node config controller"
	I1124 02:31:46.378411       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:31:46.378436       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:31:46.378441       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:31:46.478518       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:31:46.478573       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 02:31:46.478574       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 02:31:46.478599       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [bcaa0dfec6478b7b3d78ebb52e18e001772dbf1fb031bbb5f5aeee3f6f2e047b] <==
	I1124 02:30:50.589509       1 server_linux.go:53] "Using iptables proxy"
	I1124 02:30:50.649864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 02:30:50.750262       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:30:50.750315       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:30:50.750429       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:30:50.775421       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:30:50.775498       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:30:50.781395       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:30:50.781709       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:30:50.781726       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:30:50.783193       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:30:50.783226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:30:50.783251       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:30:50.783250       1 config.go:200] "Starting service config controller"
	I1124 02:30:50.783257       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:30:50.783263       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:30:50.783275       1 config.go:309] "Starting node config controller"
	I1124 02:30:50.783283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:30:50.783291       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 02:30:50.884070       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:30:50.884192       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 02:30:50.884259       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [011ce34e2a26582eb45146eecd7a13e0eecbbfb40048e91c82e8204398646303] <==
	E1124 02:30:41.648514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:30:41.648522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:30:41.648547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:30:41.648550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:30:41.648654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:30:41.648675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:30:41.648745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:30:41.648760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:30:42.465005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:30:42.515396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:30:42.523703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:30:42.529861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:30:42.625483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:30:42.628516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:30:42.666309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:30:42.679452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:30:42.825835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:30:42.900062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1124 02:30:45.044111       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 02:31:25.794834       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 02:31:25.794902       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 02:31:25.795210       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 02:31:25.795233       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 02:31:25.795368       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 02:31:25.795398       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [33d9520aecf65cccdf85716e396cb41948cafd4c476f6950b9b870412cbad9b5] <==
	E1124 02:31:32.107552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:31:32.246428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:31:32.256066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:31:32.279681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:31:32.484611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:34.338665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:31:34.646649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:31:35.195266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:31:35.223753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:31:35.322629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:31:35.443983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:31:35.698518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 02:31:36.005382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 02:31:36.379704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:31:36.402500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:36.675413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:31:36.904160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:31:36.909692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:31:37.085287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:31:37.194098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:31:37.273123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 02:31:37.322954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:31:38.185146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:31:39.583259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1124 02:31:46.047493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 02:47:53 functional-524458 kubelet[4784]: E1124 02:47:53.652404    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-zpp9z" podUID="40b778fd-24dc-49c9-9f5c-abfbc7dfe529"
	Nov 24 02:47:54 functional-524458 kubelet[4784]: E1124 02:47:54.652690    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-dbcxf" podUID="d5f11dfa-2d20-453d-87e6-0855f65e82b0"
	Nov 24 02:47:54 functional-524458 kubelet[4784]: E1124 02:47:54.652691    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-88tpq" podUID="25631234-5164-4822-8c75-7190bda5530f"
	Nov 24 02:47:56 functional-524458 kubelet[4784]: E1124 02:47:56.651794    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ccb78ded-5900-46ff-be89-2019899a83b5"
	Nov 24 02:47:59 functional-524458 kubelet[4784]: E1124 02:47:59.655092    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9bc43f9e-db29-442d-880b-c8a84389aeec"
	Nov 24 02:48:05 functional-524458 kubelet[4784]: E1124 02:48:05.651936    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-8t9t8" podUID="552b6aef-26fa-4446-b5b4-d44e2975e21d"
	Nov 24 02:48:06 functional-524458 kubelet[4784]: I1124 02:48:06.107740    4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-connect-7d85dfc575-bm6nh" podStartSLOduration=1.45380851 podStartE2EDuration="10m59.107720947s" podCreationTimestamp="2025-11-24 02:37:07 +0000 UTC" firstStartedPulling="2025-11-24 02:37:07.53308596 +0000 UTC m=+329.971094420" lastFinishedPulling="2025-11-24 02:48:05.186998397 +0000 UTC m=+987.625006857" observedRunningTime="2025-11-24 02:48:06.107702385 +0000 UTC m=+988.545710866" watchObservedRunningTime="2025-11-24 02:48:06.107720947 +0000 UTC m=+988.545729428"
	Nov 24 02:48:07 functional-524458 kubelet[4784]: E1124 02:48:07.652686    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-zpp9z" podUID="40b778fd-24dc-49c9-9f5c-abfbc7dfe529"
	Nov 24 02:48:09 functional-524458 kubelet[4784]: E1124 02:48:09.651884    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-dbcxf" podUID="d5f11dfa-2d20-453d-87e6-0855f65e82b0"
	Nov 24 02:48:09 functional-524458 kubelet[4784]: E1124 02:48:09.888275    4784 log.go:32] "PullImage from image service failed" err=<
	Nov 24 02:48:09 functional-524458 kubelet[4784]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests
	Nov 24 02:48:09 functional-524458 kubelet[4784]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Nov 24 02:48:09 functional-524458 kubelet[4784]:  > image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 24 02:48:09 functional-524458 kubelet[4784]: E1124 02:48:09.888319    4784 kuberuntime_image.go:43] "Failed to pull image" err=<
	Nov 24 02:48:09 functional-524458 kubelet[4784]:         failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests
	Nov 24 02:48:09 functional-524458 kubelet[4784]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Nov 24 02:48:09 functional-524458 kubelet[4784]:  > image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 24 02:48:09 functional-524458 kubelet[4784]: E1124 02:48:09.888404    4784 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Nov 24 02:48:09 functional-524458 kubelet[4784]:         container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-88tpq_kubernetes-dashboard(25631234-5164-4822-8c75-7190bda5530f): ErrImagePull: failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests
	Nov 24 02:48:09 functional-524458 kubelet[4784]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Nov 24 02:48:09 functional-524458 kubelet[4784]:  > logger="UnhandledError"
	Nov 24 02:48:09 functional-524458 kubelet[4784]: E1124 02:48:09.888436    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-88tpq" podUID="25631234-5164-4822-8c75-7190bda5530f"
	Nov 24 02:48:10 functional-524458 kubelet[4784]: E1124 02:48:10.652034    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ccb78ded-5900-46ff-be89-2019899a83b5"
	Nov 24 02:48:11 functional-524458 kubelet[4784]: E1124 02:48:11.651607    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9bc43f9e-db29-442d-880b-c8a84389aeec"
	Nov 24 02:48:16 functional-524458 kubelet[4784]: E1124 02:48:16.652200    4784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-8t9t8" podUID="552b6aef-26fa-4446-b5b4-d44e2975e21d"
	
	
	==> storage-provisioner [1933b021444ba331525fa058f0fd57fefe0a1dc2f1ad2bfe07daf3d4de6d2b40] <==
	I1124 02:31:26.369727       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 02:31:26.371564       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [2d6aad34e22f65a6adde2c1908faa771c84b1c7108f288e562d127f56a671c37] <==
	W1124 02:47:51.008100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:53.010680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:53.014495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:55.017893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:55.022305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:57.025566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:57.030165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:59.033577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:47:59.037094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:01.039982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:01.043916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:03.046895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:03.050553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:05.054562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:05.059623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:07.062250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:07.067073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:09.069935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:09.073539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:11.077005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:11.081189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:13.084670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:13.088722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:15.092159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:48:15.096978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-524458 -n functional-524458
helpers_test.go:269: (dbg) Run:  kubectl --context functional-524458 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-8t9t8 mysql-5bb876957f-zpp9z nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-524458 describe pod busybox-mount hello-node-75c85bcc94-8t9t8 mysql-5bb876957f-zpp9z nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-524458 describe pod busybox-mount hello-node-75c85bcc94-8t9t8 mysql-5bb876957f-zpp9z nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq: exit status 1 (108.582392ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:06 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  containerd://f4209b6719c49eceae60f8419a324de5ece633779675ba9d470e3b8d2a06797c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 02:32:13 +0000
	      Finished:     Mon, 24 Nov 2025 02:32:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vglvc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vglvc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  16m   default-scheduler  Successfully assigned default/busybox-mount to functional-524458
	  Normal  Pulling    16m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     16m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.088s (6.425s including waiting). Image size: 2395207 bytes.
	  Normal  Created    16m   kubelet            Created container: mount-munger
	  Normal  Started    16m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-8t9t8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:03 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l4v7c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l4v7c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  16m   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8t9t8 to functional-524458
	  Warning  Failed     16m   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  13m (x5 over 16m)  kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   13m (x5 over 16m)  kubelet  Error: ErrImagePull
	  Warning  Failed   13m (x4 over 15m)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  63s (x65 over 16m)  kubelet  Back-off pulling image "kicbase/echo-server"
	  Warning  Failed   63s (x65 over 16m)  kubelet  Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-zpp9z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:38:14 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-knwdf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-knwdf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-zpp9z to functional-524458
	  Warning  Failed     8m19s (x3 over 10m)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  6m48s (x5 over 10m)    kubelet  Pulling image "docker.io/mysql:5.7"
	  Warning  Failed   6m45s (x5 over 10m)    kubelet  Error: ErrImagePull
	  Warning  Failed   6m45s (x2 over 9m43s)  kubelet  Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   4m50s (x20 over 10m)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m39s (x21 over 10m)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:08 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5g4km (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5g4km:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  16m   default-scheduler  Successfully assigned default/nginx-svc to functional-524458
	  Warning  Failed     16m   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  13m (x5 over 16m)  kubelet  Pulling image "docker.io/nginx:alpine"
	  Warning  Failed   13m (x5 over 16m)  kubelet  Error: ErrImagePull
	  Warning  Failed   13m (x4 over 15m)  kubelet  Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  57s (x63 over 16m)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed   57s (x63 over 16m)  kubelet  Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-524458/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:26 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5dn22 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-5dn22:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/sp-pod to functional-524458
	  Normal   Pulling    12m (x5 over 15m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12m (x5 over 15m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   12m (x5 over 15m)   kubelet  Error: ErrImagePull
	  Normal   BackOff  40s (x62 over 15m)  kubelet  Back-off pulling image "docker.io/nginx"
	  Warning  Failed   40s (x62 over 15m)  kubelet  Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-dbcxf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-88tpq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-524458 describe pod busybox-mount hello-node-75c85bcc94-8t9t8 mysql-5bb876957f-zpp9z nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-dbcxf kubernetes-dashboard-855c9754f9-88tpq: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.72s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-524458 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-524458 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-8t9t8" [552b6aef-26fa-4446-b5b4-d44e2975e21d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-524458 -n functional-524458
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-24 02:42:04.296107437 +0000 UTC m=+1069.020461254
functional_test.go:1460: (dbg) Run:  kubectl --context functional-524458 describe po hello-node-75c85bcc94-8t9t8 -n default
functional_test.go:1460: (dbg) kubectl --context functional-524458 describe po hello-node-75c85bcc94-8t9t8 -n default:
Name:             hello-node-75c85bcc94-8t9t8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-524458/192.168.49.2
Start Time:       Mon, 24 Nov 2025 02:32:03 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l4v7c (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-l4v7c:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age    From               Message
----     ------     ----   ----               -------
Normal   Scheduled  10m    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8t9t8 to functional-524458
Warning  Failed     9m58s  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling  6m52s (x5 over 10m)    kubelet  Pulling image "kicbase/echo-server"
Warning  Failed   6m50s (x5 over 9m58s)  kubelet  Error: ErrImagePull
Warning  Failed   6m50s (x4 over 9m43s)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   4m44s (x20 over 9m57s)  kubelet  Error: ImagePullBackOff
Normal   BackOff  4m30s (x21 over 9m57s)  kubelet  Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-524458 logs hello-node-75c85bcc94-8t9t8 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-524458 logs hello-node-75c85bcc94-8t9t8 -n default: exit status 1 (67.250731ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-8t9t8" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-524458 logs hello-node-75c85bcc94-8t9t8 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)
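Note: the DeployApp failure above is Docker Hub rate limiting (429 Too Many Requests) on unauthenticated pulls of kicbase/echo-server, not a cluster fault. A minimal mitigation sketch, assuming valid Docker Hub credentials are available; the secret name and placeholder credentials below are hypothetical:

	kubectl --context functional-524458 -n default create secret docker-registry dockerhub-creds \
		--docker-server=https://index.docker.io/v1/ --docker-username=<user> --docker-password=<access-token>
	# make the default service account use the secret for all pulls in this namespace
	kubectl --context functional-524458 -n default patch serviceaccount default \
		-p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'

Authenticated pulls get a substantially higher rate limit; pre-loading the image from the host (see the note after the next failure) avoids the registry altogether.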

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-524458 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [ccb78ded-5900-46ff-be89-2019899a83b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-524458 -n functional-524458
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-11-24 02:36:08.719531865 +0000 UTC m=+713.443885691
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-524458 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-524458 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-524458/192.168.49.2
Start Time:       Mon, 24 Nov 2025 02:32:08 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:  10.244.0.8
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5g4km (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-5g4km:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age    From               Message
----     ------     ----   ----               -------
Normal   Scheduled  4m     default-scheduler  Successfully assigned default/nginx-svc to functional-524458
Warning  Failed     3m52s  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling  59s (x5 over 4m)     kubelet  Pulling image "docker.io/nginx:alpine"
Warning  Failed   57s (x5 over 3m52s)  kubelet  Error: ErrImagePull
Warning  Failed   57s (x4 over 3m36s)  kubelet  Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff  7s (x13 over 3m52s)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed   7s (x13 over 3m52s)  kubelet  Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-524458 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-524458 logs nginx-svc -n default: exit status 1 (67.531921ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-524458 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)
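Note: nginx-svc hits the same unauthenticated pull limit. One way to sidestep the in-cluster pull entirely, assuming the CI host's own Docker daemon can still pull (or already caches) the image:

	docker pull nginx:alpine                                              # on the host
	out/minikube-linux-amd64 -p functional-524458 image load nginx:alpine

minikube image load copies the image into the node's containerd image store, so the kubelet no longer needs to reach registry-1.docker.io for this tag.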

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (115.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1124 02:36:08.849830    8429 retry.go:31] will retry after 2.170434184s: Temporary Error: Get "http:": http: no Host in request URL
I1124 02:36:11.021137    8429 retry.go:31] will retry after 3.874078137s: Temporary Error: Get "http:": http: no Host in request URL
I1124 02:36:14.896227    8429 retry.go:31] will retry after 7.816678218s: Temporary Error: Get "http:": http: no Host in request URL
I1124 02:36:22.713148    8429 retry.go:31] will retry after 10.570037238s: Temporary Error: Get "http:": http: no Host in request URL
I1124 02:36:33.283325    8429 retry.go:31] will retry after 8.910302032s: Temporary Error: Get "http:": http: no Host in request URL
I1124 02:36:42.194172    8429 retry.go:31] will retry after 31.180797774s: Temporary Error: Get "http:": http: no Host in request URL
E1124 02:36:46.514382    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-524458 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.101.90.125   10.101.90.125   80:30185/TCP   5m56s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (115.37s)
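Note: the empty URL here is likely a follow-on from the WaitService failure above; with nginx-svc stuck in ImagePullBackOff there is nothing answering behind the LoadBalancer IP. A quick manual check of the tunnel path, assuming the pod eventually runs and the tunnel is kept alive in a second shell:

	out/minikube-linux-amd64 -p functional-524458 tunnel                  # leave running in another terminal
	IP=$(kubectl --context functional-524458 get svc nginx-svc \
		-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -sf "http://${IP}/" | grep -i 'Welcome to nginx'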

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 service --namespace=default --https --url hello-node: exit status 115 (549.135972ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30949
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-524458 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 service hello-node --url --format={{.IP}}: exit status 115 (546.356441ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-524458 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 service hello-node --url: exit status 115 (544.935756ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30949
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-524458 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30949
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.55s)
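Note: the three ServiceCmd failures (HTTPS, Format, URL) all exit 115 with SVC_UNREACHABLE because hello-node never got a running backend (the earlier ImagePullBackOff), so the printed URLs point at a NodePort with no endpoints. A quick way to confirm that before suspecting the URL logic, assuming the functional-524458 context:

	kubectl --context functional-524458 -n default get endpoints hello-node    # empty ENDPOINTS => no ready pods
	kubectl --context functional-524458 -n default get pods -l app=hello-node  # shows the ImagePullBackOff pod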

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (14.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-838815 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f67fa448-2a4c-4ead-ad79-cb799abf6b94] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f67fa448-2a4c-4ead-ad79-cb799abf6b94] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003445558s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-838815 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
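Note: the 1024 vs 1048576 mismatch is the container's soft open-files limit, not an image problem. A sketch for locating where the limit comes from, assuming it is inherited from the node's containerd service; the 1048576 value is the test's expectation, not a verified default:

	kubectl --context old-k8s-version-838815 exec busybox -- /bin/sh -c 'ulimit -n'              # what the test measured
	out/minikube-linux-amd64 -p old-k8s-version-838815 ssh -- 'systemctl cat containerd | grep -i limitnofile'
	# raising it would mean a containerd unit drop-in with LimitNOFILE=1048576,
	# followed by: sudo systemctl daemon-reload && sudo systemctl restart containerd (inside the node)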
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-838815
helpers_test.go:243: (dbg) docker inspect old-k8s-version-838815:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454",
	        "Created": "2025-11-24T03:12:31.944882861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248202,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:12:31.98793006Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454/hostname",
	        "HostsPath": "/var/lib/docker/containers/e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454/hosts",
	        "LogPath": "/var/lib/docker/containers/e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454/e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454-json.log",
	        "Name": "/old-k8s-version-838815",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-838815:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-838815",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454",
	                "LowerDir": "/var/lib/docker/overlay2/7e8afc31b2aacaf13a24d2863b14f38631ffaeb5491173a1dfea41c472591119-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e8afc31b2aacaf13a24d2863b14f38631ffaeb5491173a1dfea41c472591119/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e8afc31b2aacaf13a24d2863b14f38631ffaeb5491173a1dfea41c472591119/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e8afc31b2aacaf13a24d2863b14f38631ffaeb5491173a1dfea41c472591119/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-838815",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-838815/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-838815",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-838815",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-838815",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "46af0e078d7cd48693c4e170b95a2847ff848847fb3f7442eb76e8b3ddca1a8c",
	            "SandboxKey": "/var/run/docker/netns/46af0e078d7c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-838815": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1d770a7414674547bb7a07352cd1f0600b0bb79b425bd0f5ea101ba2a99a33",
	                    "EndpointID": "fdc1dc1f92c27f97b6bdf1fc3d0aab0d23bf492eb0e6756f9ee39200b64fa8d1",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "82:72:23:e5:41:46",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-838815",
	                        "e16c6bbd00c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838815 -n old-k8s-version-838815
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-838815 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-838815 logs -n 25: (1.139191415s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-682898 sudo systemctl cat docker --no-pager                                                                                                                                                                                               │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo docker system info                                                                                                                                                                                                            │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ start   │ -p NoKubernetes-502612 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ ssh     │ -p cilium-682898 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo containerd config dump                                                                                                                                                                                                        │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo crio config                                                                                                                                                                                                                   │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p cilium-682898                                                                                                                                                                                                                                    │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p old-k8s-version-838815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-838815 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:13 UTC │
	│ ssh     │ -p NoKubernetes-502612 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ stop    │ -p NoKubernetes-502612                                                                                                                                                                                                                              │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p NoKubernetes-502612 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ ssh     │ -p NoKubernetes-502612 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ delete  │ -p NoKubernetes-502612                                                                                                                                                                                                                              │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:13:19
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:13:19.186725  254321 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:13:19.186836  254321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:19.186839  254321 out.go:374] Setting ErrFile to fd 2...
	I1124 03:13:19.186843  254321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:19.187066  254321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:13:19.187500  254321 out.go:368] Setting JSON to false
	I1124 03:13:19.188616  254321 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3342,"bootTime":1763950657,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:13:19.188666  254321 start.go:143] virtualization: kvm guest
	I1124 03:13:19.191025  254321 out.go:179] * [NoKubernetes-502612] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:13:19.192398  254321 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:13:19.192398  254321 notify.go:221] Checking for updates...
	I1124 03:13:19.193966  254321 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:13:19.195465  254321 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:13:19.197096  254321 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 03:13:19.198416  254321 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:13:19.199724  254321 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:13:19.201609  254321 config.go:182] Loaded profile config "NoKubernetes-502612": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1124 03:13:19.202205  254321 start.go:1806] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1124 03:13:19.202226  254321 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:13:19.227883  254321 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:13:19.227990  254321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:19.288040  254321 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 03:13:19.277620942 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:19.288184  254321 docker.go:319] overlay module found
	I1124 03:13:19.290169  254321 out.go:179] * Using the docker driver based on existing profile
	I1124 03:13:19.291318  254321 start.go:309] selected driver: docker
	I1124 03:13:19.291325  254321 start.go:927] validating driver "docker" against &{Name:NoKubernetes-502612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-502612 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: St
aticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:19.291392  254321 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:13:19.291465  254321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:19.356693  254321 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 03:13:19.345183993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:19.357365  254321 cni.go:84] Creating CNI manager for ""
	I1124 03:13:19.357417  254321 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:13:19.357454  254321 start.go:353] cluster config:
	{Name:NoKubernetes-502612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-502612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:19.359767  254321 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-502612
	I1124 03:13:19.360894  254321 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:13:19.362125  254321 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:13:19.363125  254321 preload.go:188] Checking if preload exists for k8s version v0.0.0 and runtime containerd
	I1124 03:13:19.363226  254321 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:13:19.388000  254321 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:13:19.388011  254321 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	W1124 03:13:19.692535  254321 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	W1124 03:13:19.864530  254321 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	I1124 03:13:19.864644  254321 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/NoKubernetes-502612/config.json ...
	I1124 03:13:19.864880  254321 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:13:19.864917  254321 start.go:360] acquireMachinesLock for NoKubernetes-502612: {Name:mkb4d5eb02261105fd88223308c9e769a19f13a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:19.864969  254321 start.go:364] duration metric: took 36.311µs to acquireMachinesLock for "NoKubernetes-502612"
	I1124 03:13:19.864982  254321 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:13:19.864985  254321 fix.go:54] fixHost starting: 
	I1124 03:13:19.865179  254321 cli_runner.go:164] Run: docker container inspect NoKubernetes-502612 --format={{.State.Status}}
	I1124 03:13:19.886715  254321 fix.go:112] recreateIfNeeded on NoKubernetes-502612: state=Stopped err=<nil>
	W1124 03:13:19.886738  254321 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:13:18.380843  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:19.365382  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:38058->192.168.76.2:8443: read: connection reset by peer
	I1124 03:13:19.365443  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:19.365490  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:19.398210  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:19.398231  222154 cri.go:89] found id: "5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07"
	I1124 03:13:19.398236  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:19.398241  222154 cri.go:89] found id: ""
	I1124 03:13:19.398250  222154 logs.go:282] 3 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:19.398305  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:19.402960  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:19.406600  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:19.410238  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:19.410306  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:19.437856  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:19.437879  222154 cri.go:89] found id: ""
	I1124 03:13:19.437889  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:19.437946  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:19.442018  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:19.442090  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:19.469550  222154 cri.go:89] found id: ""
	I1124 03:13:19.469574  222154 logs.go:282] 0 containers: []
	W1124 03:13:19.469587  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:19.469593  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:19.469636  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:19.501401  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:19.501424  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:19.501430  222154 cri.go:89] found id: ""
	I1124 03:13:19.501437  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:19.501489  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:19.505413  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:19.509244  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:19.509319  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:19.536258  222154 cri.go:89] found id: ""
	I1124 03:13:19.536282  222154 logs.go:282] 0 containers: []
	W1124 03:13:19.536289  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:19.536295  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:19.536346  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:19.563496  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:19.563516  222154 cri.go:89] found id: "6fe7dfb21b4d4810bb61c922149c1f7d5cca75a718abf3f311db506fbc8e6421"
	I1124 03:13:19.563519  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:19.563522  222154 cri.go:89] found id: ""
	I1124 03:13:19.563530  222154 logs.go:282] 3 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 6fe7dfb21b4d4810bb61c922149c1f7d5cca75a718abf3f311db506fbc8e6421 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:19.563585  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:19.567439  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:19.571052  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:19.574558  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:19.574602  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:19.602085  222154 cri.go:89] found id: ""
	I1124 03:13:19.602108  222154 logs.go:282] 0 containers: []
	W1124 03:13:19.602119  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:19.602127  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:19.602189  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:19.630563  222154 cri.go:89] found id: ""
	I1124 03:13:19.630587  222154 logs.go:282] 0 containers: []
	W1124 03:13:19.630595  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:19.630604  222154 logs.go:123] Gathering logs for kube-controller-manager [6fe7dfb21b4d4810bb61c922149c1f7d5cca75a718abf3f311db506fbc8e6421] ...
	I1124 03:13:19.630619  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6fe7dfb21b4d4810bb61c922149c1f7d5cca75a718abf3f311db506fbc8e6421"
	I1124 03:13:19.662015  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:13:19.662040  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:19.697571  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:13:19.697598  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:13:19.752061  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:19.752094  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:19.839994  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:19.840024  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:19.853431  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:13:19.853454  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:19.886214  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:13:19.886243  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:19.936220  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:13:19.936253  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:19.969210  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:13:19.969238  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:19.998280  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:13:19.998304  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:13:20.031834  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:20.031867  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:20.100281  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:20.100305  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:13:20.100322  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:20.136349  222154 logs.go:123] Gathering logs for kube-apiserver [5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07] ...
	I1124 03:13:20.136376  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07"
	W1124 03:13:20.163465  222154 logs.go:130] failed kube-apiserver [5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07": Process exited with status 1
	stdout:
	
	stderr:
	E1124 03:13:20.161385    4220 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07\": not found" containerID="5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07"
	time="2025-11-24T03:13:20Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07\": not found"
	 output: 
	** stderr ** 
	E1124 03:13:20.161385    4220 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07\": not found" containerID="5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07"
	time="2025-11-24T03:13:20Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e273b195fe340b6e868c3364d0ad579655f306b1231182b572b27532ee0cc07\": not found"
	
	** /stderr **
	I1124 03:13:20.163483  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:20.163496  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:19.889300  254321 out.go:252] * Restarting existing docker container for "NoKubernetes-502612" ...
	I1124 03:13:19.889369  254321 cli_runner.go:164] Run: docker start NoKubernetes-502612
	I1124 03:13:20.202106  254321 cli_runner.go:164] Run: docker container inspect NoKubernetes-502612 --format={{.State.Status}}
	I1124 03:13:20.222209  254321 kic.go:430] container "NoKubernetes-502612" state is running.
	I1124 03:13:20.222614  254321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-502612
	I1124 03:13:20.241621  254321 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/NoKubernetes-502612/config.json ...
	I1124 03:13:20.241885  254321 machine.go:94] provisionDockerMachine start ...
	I1124 03:13:20.241951  254321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-502612
	I1124 03:13:20.261696  254321 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:20.261944  254321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33062 <nil> <nil>}
	I1124 03:13:20.261950  254321 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:13:20.262484  254321 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59352->127.0.0.1:33062: read: connection reset by peer
	I1124 03:13:23.408694  254321 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-502612
	
	I1124 03:13:23.408714  254321 ubuntu.go:182] provisioning hostname "NoKubernetes-502612"
	I1124 03:13:23.408763  254321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-502612
	I1124 03:13:23.427670  254321 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:23.427987  254321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33062 <nil> <nil>}
	I1124 03:13:23.427997  254321 main.go:143] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-502612 && echo "NoKubernetes-502612" | sudo tee /etc/hostname
	I1124 03:13:23.578589  254321 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-502612
	
	I1124 03:13:23.578650  254321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-502612
	I1124 03:13:23.597474  254321 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:23.597670  254321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33062 <nil> <nil>}
	I1124 03:13:23.597680  254321 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-502612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-502612/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-502612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:13:23.737490  254321 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:13:23.737509  254321 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-4883/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-4883/.minikube}
	I1124 03:13:23.737542  254321 ubuntu.go:190] setting up certificates
	I1124 03:13:23.737553  254321 provision.go:84] configureAuth start
	I1124 03:13:23.737600  254321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-502612
	I1124 03:13:23.755556  254321 provision.go:143] copyHostCerts
	I1124 03:13:23.755606  254321 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem, removing ...
	I1124 03:13:23.755617  254321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem
	I1124 03:13:23.755677  254321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem (1078 bytes)
	I1124 03:13:23.755830  254321 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem, removing ...
	I1124 03:13:23.755837  254321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem
	I1124 03:13:23.755878  254321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem (1123 bytes)
	I1124 03:13:23.755945  254321 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem, removing ...
	I1124 03:13:23.755948  254321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem
	I1124 03:13:23.755971  254321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem (1679 bytes)
	I1124 03:13:23.756045  254321 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-502612 san=[127.0.0.1 192.168.85.2 NoKubernetes-502612 localhost minikube]
	I1124 03:13:23.857667  254321 provision.go:177] copyRemoteCerts
	I1124 03:13:23.857713  254321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:13:23.857742  254321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-502612
	I1124 03:13:23.876051  254321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33062 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/NoKubernetes-502612/id_rsa Username:docker}
	I1124 03:13:23.975032  254321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:13:23.993957  254321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 03:13:24.012083  254321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:13:24.028929  254321 provision.go:87] duration metric: took 291.367917ms to configureAuth
	I1124 03:13:24.028945  254321 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:13:24.029090  254321 config.go:182] Loaded profile config "NoKubernetes-502612": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1124 03:13:24.029095  254321 machine.go:97] duration metric: took 3.787204526s to provisionDockerMachine
	I1124 03:13:24.029110  254321 start.go:293] postStartSetup for "NoKubernetes-502612" (driver="docker")
	I1124 03:13:24.029120  254321 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:13:24.029176  254321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:13:24.029213  254321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-502612
	I1124 03:13:24.048128  254321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33062 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/NoKubernetes-502612/id_rsa Username:docker}
	I1124 03:13:24.148942  254321 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:13:24.152547  254321 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:13:24.152562  254321 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:13:24.152570  254321 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/addons for local assets ...
	I1124 03:13:24.152615  254321 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/files for local assets ...
	I1124 03:13:24.152690  254321 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem -> 84292.pem in /etc/ssl/certs
	I1124 03:13:24.152764  254321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:13:24.160351  254321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:13:24.177649  254321 start.go:296] duration metric: took 148.525903ms for postStartSetup
	I1124 03:13:24.177734  254321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:13:24.177794  254321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-502612
	I1124 03:13:24.196142  254321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33062 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/NoKubernetes-502612/id_rsa Username:docker}
	I1124 03:13:24.292958  254321 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:13:24.297912  254321 fix.go:56] duration metric: took 4.43291322s for fixHost
	I1124 03:13:24.297937  254321 start.go:83] releasing machines lock for "NoKubernetes-502612", held for 4.432953463s
	I1124 03:13:24.297998  254321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-502612
	I1124 03:13:24.316705  254321 ssh_runner.go:195] Run: cat /version.json
	I1124 03:13:24.316747  254321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-502612
	I1124 03:13:24.316747  254321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:13:24.316819  254321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-502612
	I1124 03:13:24.335893  254321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33062 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/NoKubernetes-502612/id_rsa Username:docker}
	I1124 03:13:24.336166  254321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33062 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/NoKubernetes-502612/id_rsa Username:docker}
	I1124 03:13:24.432168  254321 ssh_runner.go:195] Run: systemctl --version
	I1124 03:13:24.492886  254321 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:13:24.497749  254321 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:13:24.497822  254321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:13:24.506063  254321 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:13:24.506076  254321 start.go:496] detecting cgroup driver to use...
	I1124 03:13:24.506106  254321 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:13:24.506162  254321 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:13:24.521666  254321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:13:24.534118  254321 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:13:24.534173  254321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:13:24.549847  254321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:13:24.562615  254321 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:13:24.641446  254321 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:13:24.725948  254321 docker.go:234] disabling docker service ...
	I1124 03:13:24.726005  254321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:13:24.740706  254321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:13:24.753410  254321 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:13:24.834367  254321 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:13:24.917672  254321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:13:24.930627  254321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:13:24.945559  254321 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21975-4883/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	I1124 03:13:25.475539  254321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 03:13:25.485312  254321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:13:25.494595  254321 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 03:13:25.494696  254321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 03:13:25.503529  254321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:13:25.512526  254321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:13:25.521591  254321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:13:25.530299  254321 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:13:25.538509  254321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:13:25.547488  254321 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:13:25.555265  254321 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:13:25.562794  254321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:25.644464  254321 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 03:13:25.718306  254321 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:13:25.718358  254321 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:13:25.723014  254321 start.go:564] Will wait 60s for crictl version
	I1124 03:13:25.723064  254321 ssh_runner.go:195] Run: which crictl
	I1124 03:13:25.726735  254321 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:13:25.752869  254321 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:13:25.752926  254321 ssh_runner.go:195] Run: containerd --version
	I1124 03:13:25.774698  254321 ssh_runner.go:195] Run: containerd --version
	I1124 03:13:25.798080  254321 out.go:179] * Preparing containerd 2.1.5 ...
	I1124 03:13:25.799466  254321 ssh_runner.go:195] Run: rm -f paused
	I1124 03:13:25.804822  254321 out.go:179] * Done! minikube is ready without Kubernetes!
	I1124 03:13:25.806807  254321 out.go:203] ╭──────────────────────────────────────────────────────────╮
	│                                                          │
	│          * Things to try without Kubernetes ...          │
	│                                                          │
	│    - "minikube ssh" to SSH into minikube's node.         │
	│    - "minikube image" to build images without docker.    │
	│                                                          │
	╰──────────────────────────────────────────────────────────╯
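The restart of the NoKubernetes-502612 profile above shows the commands minikube runs to point crictl at containerd and to switch the runtime to the systemd cgroup driver before restarting it. A minimal sketch of reproducing those steps by hand inside the node, assuming the same /etc/containerd/config.toml layout and the /usr/local/bin/crictl path reported in the log:

	# point crictl at the containerd socket (same file minikube writes above)
	printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
	# switch the runc runtime to the systemd cgroup driver
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	# apply the change and confirm the runtime answers over CRI
	sudo systemctl daemon-reload
	sudo systemctl restart containerd
	sudo /usr/local/bin/crictl version   # expects RuntimeName: containerd, RuntimeApiVersion: v1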
	I1124 03:13:22.702688  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:22.703152  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:22.703199  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:22.703245  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:22.729763  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:22.729794  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:22.729800  222154 cri.go:89] found id: ""
	I1124 03:13:22.729810  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:22.729862  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:22.733746  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:22.737170  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:22.737209  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:22.761700  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:22.761718  222154 cri.go:89] found id: ""
	I1124 03:13:22.761726  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:22.761803  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:22.765624  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:22.765683  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:22.791238  222154 cri.go:89] found id: ""
	I1124 03:13:22.791264  222154 logs.go:282] 0 containers: []
	W1124 03:13:22.791271  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:22.791276  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:22.791327  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:22.817462  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:22.817478  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:22.817482  222154 cri.go:89] found id: ""
	I1124 03:13:22.817488  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:22.817531  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:22.821942  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:22.825810  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:22.825879  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:22.851340  222154 cri.go:89] found id: ""
	I1124 03:13:22.851360  222154 logs.go:282] 0 containers: []
	W1124 03:13:22.851367  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:22.851373  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:22.851416  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:22.877248  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:22.877265  222154 cri.go:89] found id: "6fe7dfb21b4d4810bb61c922149c1f7d5cca75a718abf3f311db506fbc8e6421"
	I1124 03:13:22.877269  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:22.877272  222154 cri.go:89] found id: ""
	I1124 03:13:22.877279  222154 logs.go:282] 3 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 6fe7dfb21b4d4810bb61c922149c1f7d5cca75a718abf3f311db506fbc8e6421 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:22.877321  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:22.881142  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:22.884760  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:22.888235  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:22.888288  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:22.913735  222154 cri.go:89] found id: ""
	I1124 03:13:22.913761  222154 logs.go:282] 0 containers: []
	W1124 03:13:22.913770  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:22.913790  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:22.913847  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:22.942229  222154 cri.go:89] found id: ""
	I1124 03:13:22.942259  222154 logs.go:282] 0 containers: []
	W1124 03:13:22.942266  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:22.942275  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:22.942287  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:23.023219  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:23.023257  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:23.038389  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:23.038413  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:23.095976  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:23.095998  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:13:23.096014  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:23.128511  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:13:23.128540  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:23.159649  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:13:23.159675  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:23.194449  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:13:23.194481  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:23.221504  222154 logs.go:123] Gathering logs for kube-controller-manager [6fe7dfb21b4d4810bb61c922149c1f7d5cca75a718abf3f311db506fbc8e6421] ...
	I1124 03:13:23.221527  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6fe7dfb21b4d4810bb61c922149c1f7d5cca75a718abf3f311db506fbc8e6421"
	I1124 03:13:23.248908  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:23.248934  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:23.285277  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:13:23.285305  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:23.337508  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:13:23.337536  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:23.374223  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:13:23.374252  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:13:23.417261  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:13:23.417294  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:13:25.950871  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:25.951257  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:25.951305  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:25.951344  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:25.981318  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:25.981338  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:25.981349  222154 cri.go:89] found id: ""
	I1124 03:13:25.981358  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:25.981415  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:25.985471  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:25.989351  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:25.989420  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:26.016267  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:26.016291  222154 cri.go:89] found id: ""
	I1124 03:13:26.016301  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:26.016358  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:26.020276  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:26.020335  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:26.046355  222154 cri.go:89] found id: ""
	I1124 03:13:26.046378  222154 logs.go:282] 0 containers: []
	W1124 03:13:26.046386  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:26.046393  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:26.046448  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:26.072432  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:26.072454  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:26.072460  222154 cri.go:89] found id: ""
	I1124 03:13:26.072469  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:26.072523  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:26.077366  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:26.081199  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:26.081252  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:26.109928  222154 cri.go:89] found id: ""
	I1124 03:13:26.109956  222154 logs.go:282] 0 containers: []
	W1124 03:13:26.109967  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:26.109975  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:26.110036  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:26.137005  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:26.137025  222154 cri.go:89] found id: "6fe7dfb21b4d4810bb61c922149c1f7d5cca75a718abf3f311db506fbc8e6421"
	I1124 03:13:26.137030  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:26.137033  222154 cri.go:89] found id: ""
	I1124 03:13:26.137040  222154 logs.go:282] 3 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 6fe7dfb21b4d4810bb61c922149c1f7d5cca75a718abf3f311db506fbc8e6421 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:26.137087  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:26.141655  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:26.145465  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:26.149050  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:26.149091  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
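The repeated blocks from process 222154 above are minikube's apiserver wait loop: each failed probe of https://192.168.76.2:8443/healthz triggers another round of listing control-plane containers through the CRI and tailing their logs. A minimal sketch of performing the same post-mortem by hand on that node, assuming the address and crictl path shown in the log; <container-id> stands for one of the IDs returned by the listing step:

	# probe the apiserver directly (expect a refused or reset connection while it is down)
	curl -k --max-time 2 https://192.168.76.2:8443/healthz
	# list kube-apiserver containers, running or exited, known to containerd
	sudo /usr/local/bin/crictl ps -a --quiet --name=kube-apiserver
	# tail the logs of one of the returned IDs
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>
	# runtime- and kubelet-level context from the journal
	sudo journalctl -u containerd -n 400
	sudo journalctl -u kubelet -n 400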
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	050a9d62b1fae       56cc512116c8f       8 seconds ago       Running             busybox                   0                   92cf5351ffb4f       busybox                                          default
	9c967be134687       ead0a4a53df89       13 seconds ago      Running             coredns                   0                   bff0a5e0d7385       coredns-5dd5756b68-gfsqm                         kube-system
	d417c8d3e5028       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   ea1c2bcd52579       storage-provisioner                              kube-system
	da6efdd3aa62d       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   eb624250bfd1b       kindnet-rvm46                                    kube-system
	5252475449db6       ea1030da44aa1       27 seconds ago      Running             kube-proxy                0                   78911a1c265aa       kube-proxy-cz68g                                 kube-system
	ba673dc701109       73deb9a3f7025       45 seconds ago      Running             etcd                      0                   ad0d441d46554       etcd-old-k8s-version-838815                      kube-system
	6d5b31c71edc4       bb5e0dde9054c       45 seconds ago      Running             kube-apiserver            0                   4c1c4c6ae28a1       kube-apiserver-old-k8s-version-838815            kube-system
	6d6e12d242d5e       f6f496300a2ae       45 seconds ago      Running             kube-scheduler            0                   6a18181f2c948       kube-scheduler-old-k8s-version-838815            kube-system
	f861f902328c3       4be79c38a4bab       45 seconds ago      Running             kube-controller-manager   0                   50cddfad68062       kube-controller-manager-old-k8s-version-838815   kube-system
	
	
	==> containerd <==
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.343520631Z" level=info msg="Container 9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.343525065Z" level=info msg="StartContainer for \"d417c8d3e50280e381cd48b9133ff9b7eee5647f3de99e210052408619e7a770\""
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.344623873Z" level=info msg="connecting to shim d417c8d3e50280e381cd48b9133ff9b7eee5647f3de99e210052408619e7a770" address="unix:///run/containerd/s/672aa13e022fadb35f4b054f5001b401a61146b514b12ffda526019465039f4c" protocol=ttrpc version=3
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.349932935Z" level=info msg="CreateContainer within sandbox \"bff0a5e0d7385183b5cd063a7ee6b2d0c23136b1879333f731dc62113d829a90\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39\""
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.350531386Z" level=info msg="StartContainer for \"9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39\""
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.351524659Z" level=info msg="connecting to shim 9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39" address="unix:///run/containerd/s/5120a4bf9e25d7330f4a14e0d710619beecdcf6f370be50e3cd3a2ea9899d637" protocol=ttrpc version=3
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.393348174Z" level=info msg="StartContainer for \"d417c8d3e50280e381cd48b9133ff9b7eee5647f3de99e210052408619e7a770\" returns successfully"
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.393636097Z" level=info msg="StartContainer for \"9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39\" returns successfully"
	Nov 24 03:13:17 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:17.386486264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f67fa448-2a4c-4ead-ad79-cb799abf6b94,Namespace:default,Attempt:0,}"
	Nov 24 03:13:17 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:17.430863676Z" level=info msg="connecting to shim 92cf5351ffb4f515af1b587d1f0fee9a7329fb98ed2b7cafd752a27fe2a38ba8" address="unix:///run/containerd/s/d5e15bc8d9c15638c6cf71bda15dfc05e7182e981b25fed11ce6e2e1d044487e" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:13:17 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:17.504184159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f67fa448-2a4c-4ead-ad79-cb799abf6b94,Namespace:default,Attempt:0,} returns sandbox id \"92cf5351ffb4f515af1b587d1f0fee9a7329fb98ed2b7cafd752a27fe2a38ba8\""
	Nov 24 03:13:17 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:17.506016880Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.758901709Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.759824351Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396642"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.761067284Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.763272566Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.763722978Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.257658472s"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.763763894Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.765696940Z" level=info msg="CreateContainer within sandbox \"92cf5351ffb4f515af1b587d1f0fee9a7329fb98ed2b7cafd752a27fe2a38ba8\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.773837124Z" level=info msg="Container 050a9d62b1fae9a40cf6f3ad4abba01d04b1614b159ee822e5eda885e9338283: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.779819048Z" level=info msg="CreateContainer within sandbox \"92cf5351ffb4f515af1b587d1f0fee9a7329fb98ed2b7cafd752a27fe2a38ba8\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"050a9d62b1fae9a40cf6f3ad4abba01d04b1614b159ee822e5eda885e9338283\""
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.780433197Z" level=info msg="StartContainer for \"050a9d62b1fae9a40cf6f3ad4abba01d04b1614b159ee822e5eda885e9338283\""
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.781242290Z" level=info msg="connecting to shim 050a9d62b1fae9a40cf6f3ad4abba01d04b1614b159ee822e5eda885e9338283" address="unix:///run/containerd/s/d5e15bc8d9c15638c6cf71bda15dfc05e7182e981b25fed11ce6e2e1d044487e" protocol=ttrpc version=3
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.830895969Z" level=info msg="StartContainer for \"050a9d62b1fae9a40cf6f3ad4abba01d04b1614b159ee822e5eda885e9338283\" returns successfully"
	Nov 24 03:13:27 old-k8s-version-838815 containerd[662]: E1124 03:13:27.152955     662 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50204 - 39212 "HINFO IN 6376129420371334241.7776922599207551710. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030113409s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-838815
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-838815
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=old-k8s-version-838815
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_12_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:12:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-838815
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:13:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:13:18 +0000   Mon, 24 Nov 2025 03:12:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:13:18 +0000   Mon, 24 Nov 2025 03:12:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:13:18 +0000   Mon, 24 Nov 2025 03:12:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:13:18 +0000   Mon, 24 Nov 2025 03:13:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-838815
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                ab7830c7-5c46-485b-9e98-41065a0d51fb
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-gfsqm                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-838815                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-rvm46                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-838815             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-838815    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-cz68g                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-838815             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  41s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-838815 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-838815 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-838815 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node old-k8s-version-838815 event: Registered Node old-k8s-version-838815 in Controller
	  Normal  NodeReady                15s   kubelet          Node old-k8s-version-838815 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [ba673dc701109bf125ff9985c0914f2ba2109e73d86e870cceda5494df539e38] <==
	{"level":"info","ts":"2025-11-24T03:12:42.539734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-11-24T03:12:42.53991Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-24T03:12:42.541913Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T03:12:42.542035Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-24T03:12:42.54215Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-24T03:12:42.542278Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T03:12:42.542332Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T03:12:42.731051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-24T03:12:42.731099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-24T03:12:42.73113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-11-24T03:12:42.731149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-11-24T03:12:42.731157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-24T03:12:42.731169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-24T03:12:42.731179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-24T03:12:42.731992Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-838815 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T03:12:42.732033Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T03:12:42.732111Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T03:12:42.732283Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T03:12:42.732858Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T03:12:42.732376Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:12:42.733679Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:12:42.733915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T03:12:42.733946Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:12:42.733991Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:12:42.734198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 03:13:28 up 55 min,  0 user,  load average: 2.27, 2.81, 1.88
	Linux old-k8s-version-838815 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [da6efdd3aa62d69f1d169afe237a09597925d965af4ae63cb4a3d5c4fdec4a9e] <==
	I1124 03:13:03.628592       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:13:03.628864       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 03:13:03.629021       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:13:03.629037       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:13:03.629056       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:13:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:13:03.923696       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:13:03.923812       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:13:03.923827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:13:03.928287       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:13:04.323206       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:13:04.323241       1 metrics.go:72] Registering metrics
	I1124 03:13:04.323390       1 controller.go:711] "Syncing nftables rules"
	I1124 03:13:13.834505       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:13:13.834577       1 main.go:301] handling current node
	I1124 03:13:23.831969       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:13:23.832009       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6d5b31c71edc46daad185ace0e1d3f5ec67dd2787b6d503af150ed6b776dd725] <==
	I1124 03:12:44.304286       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:12:44.304446       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 03:12:44.304500       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 03:12:44.305715       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 03:12:44.311300       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1124 03:12:44.311418       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 03:12:44.312309       1 shared_informer.go:318] Caches are synced for configmaps
	I1124 03:12:44.345076       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:12:44.352770       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 03:12:45.215593       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:12:45.220057       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:12:45.220077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:12:45.621298       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:12:45.655716       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:12:45.718146       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:12:45.723374       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 03:12:45.724339       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 03:12:45.728024       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:12:46.249563       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 03:12:47.261089       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 03:12:47.272494       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:12:47.283453       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 03:12:59.912069       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:12:59.912147       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:12:59.957889       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [f861f902328c35216c5237199b026c1c5955de0259a65cb749000ef69844ea95] <==
	I1124 03:12:59.302029       1 shared_informer.go:318] Caches are synced for endpoint
	I1124 03:12:59.305619       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 03:12:59.622847       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:12:59.652268       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:12:59.652302       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 03:12:59.923016       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rvm46"
	I1124 03:12:59.925308       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cz68g"
	I1124 03:12:59.962166       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1124 03:13:00.115059       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6vrfh"
	I1124 03:13:00.122441       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gfsqm"
	I1124 03:13:00.129502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="167.92642ms"
	I1124 03:13:00.141994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.428013ms"
	I1124 03:13:00.142125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.703µs"
	I1124 03:13:00.142552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.68µs"
	I1124 03:13:00.244953       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 03:13:00.260184       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-6vrfh"
	I1124 03:13:00.266202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.285563ms"
	I1124 03:13:00.270756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="4.497899ms"
	I1124 03:13:00.270915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.256µs"
	I1124 03:13:13.921589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.633µs"
	I1124 03:13:13.933090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.668µs"
	I1124 03:13:14.064561       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1124 03:13:14.436955       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="172.739µs"
	I1124 03:13:15.447731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.036467ms"
	I1124 03:13:15.447889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.488µs"
	
	
	==> kube-proxy [5252475449db61ed023b07a2c7783bea6f77e7aad8afe357a282907f58383b49] <==
	I1124 03:13:00.553814       1 server_others.go:69] "Using iptables proxy"
	I1124 03:13:00.562245       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1124 03:13:00.583877       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:13:00.586348       1 server_others.go:152] "Using iptables Proxier"
	I1124 03:13:00.586605       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 03:13:00.586633       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 03:13:00.586674       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 03:13:00.587025       1 server.go:846] "Version info" version="v1.28.0"
	I1124 03:13:00.587044       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:13:00.588214       1 config.go:188] "Starting service config controller"
	I1124 03:13:00.588236       1 config.go:97] "Starting endpoint slice config controller"
	I1124 03:13:00.588269       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 03:13:00.588256       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 03:13:00.588337       1 config.go:315] "Starting node config controller"
	I1124 03:13:00.588346       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 03:13:00.689029       1 shared_informer.go:318] Caches are synced for service config
	I1124 03:13:00.689045       1 shared_informer.go:318] Caches are synced for node config
	I1124 03:13:00.689074       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6d6e12d242d5e9f46758e6fc6e8d424eb9bd8d2f091a9c6be9a834d07c08f917] <==
	W1124 03:12:44.275669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 03:12:44.275724       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1124 03:12:44.275599       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1124 03:12:44.275805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1124 03:12:44.275845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 03:12:44.276206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 03:12:44.275849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1124 03:12:44.276368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1124 03:12:45.097029       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1124 03:12:45.097062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1124 03:12:45.129902       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1124 03:12:45.129937       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 03:12:45.142527       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1124 03:12:45.142564       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1124 03:12:45.259310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 03:12:45.259350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1124 03:12:45.333621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 03:12:45.333668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1124 03:12:45.363066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 03:12:45.363103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 03:12:45.377571       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1124 03:12:45.377612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1124 03:12:45.447921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 03:12:45.447969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1124 03:12:47.670326       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 03:12:59 old-k8s-version-838815 kubelet[1523]: I1124 03:12:59.091469    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:12:59 old-k8s-version-838815 kubelet[1523]: I1124 03:12:59.936533    1523 topology_manager.go:215] "Topology Admit Handler" podUID="f375e199-56a3-44e4-97fb-08f38dc56b33" podNamespace="kube-system" podName="kindnet-rvm46"
	Nov 24 03:12:59 old-k8s-version-838815 kubelet[1523]: I1124 03:12:59.936698    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d975541d-c6d9-4d84-8dc6-4ee5db7a575f" podNamespace="kube-system" podName="kube-proxy-cz68g"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111239    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d975541d-c6d9-4d84-8dc6-4ee5db7a575f-kube-proxy\") pod \"kube-proxy-cz68g\" (UID: \"d975541d-c6d9-4d84-8dc6-4ee5db7a575f\") " pod="kube-system/kube-proxy-cz68g"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111293    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d975541d-c6d9-4d84-8dc6-4ee5db7a575f-xtables-lock\") pod \"kube-proxy-cz68g\" (UID: \"d975541d-c6d9-4d84-8dc6-4ee5db7a575f\") " pod="kube-system/kube-proxy-cz68g"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111322    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f375e199-56a3-44e4-97fb-08f38dc56b33-xtables-lock\") pod \"kindnet-rvm46\" (UID: \"f375e199-56a3-44e4-97fb-08f38dc56b33\") " pod="kube-system/kindnet-rvm46"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111353    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d975541d-c6d9-4d84-8dc6-4ee5db7a575f-lib-modules\") pod \"kube-proxy-cz68g\" (UID: \"d975541d-c6d9-4d84-8dc6-4ee5db7a575f\") " pod="kube-system/kube-proxy-cz68g"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111414    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzv5\" (UniqueName: \"kubernetes.io/projected/d975541d-c6d9-4d84-8dc6-4ee5db7a575f-kube-api-access-jmzv5\") pod \"kube-proxy-cz68g\" (UID: \"d975541d-c6d9-4d84-8dc6-4ee5db7a575f\") " pod="kube-system/kube-proxy-cz68g"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111474    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f375e199-56a3-44e4-97fb-08f38dc56b33-cni-cfg\") pod \"kindnet-rvm46\" (UID: \"f375e199-56a3-44e4-97fb-08f38dc56b33\") " pod="kube-system/kindnet-rvm46"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111519    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f375e199-56a3-44e4-97fb-08f38dc56b33-lib-modules\") pod \"kindnet-rvm46\" (UID: \"f375e199-56a3-44e4-97fb-08f38dc56b33\") " pod="kube-system/kindnet-rvm46"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111547    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lngr5\" (UniqueName: \"kubernetes.io/projected/f375e199-56a3-44e4-97fb-08f38dc56b33-kube-api-access-lngr5\") pod \"kindnet-rvm46\" (UID: \"f375e199-56a3-44e4-97fb-08f38dc56b33\") " pod="kube-system/kindnet-rvm46"
	Nov 24 03:13:04 old-k8s-version-838815 kubelet[1523]: I1124 03:13:04.410053    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cz68g" podStartSLOduration=5.409992605 podCreationTimestamp="2025-11-24 03:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:01.402594532 +0000 UTC m=+14.168390102" watchObservedRunningTime="2025-11-24 03:13:04.409992605 +0000 UTC m=+17.175788176"
	Nov 24 03:13:04 old-k8s-version-838815 kubelet[1523]: I1124 03:13:04.410389    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-rvm46" podStartSLOduration=2.741217364 podCreationTimestamp="2025-11-24 03:12:59 +0000 UTC" firstStartedPulling="2025-11-24 03:13:00.64654656 +0000 UTC m=+13.412342123" lastFinishedPulling="2025-11-24 03:13:03.315690251 +0000 UTC m=+16.081485812" observedRunningTime="2025-11-24 03:13:04.409957816 +0000 UTC m=+17.175753387" watchObservedRunningTime="2025-11-24 03:13:04.410361053 +0000 UTC m=+17.176156622"
	Nov 24 03:13:13 old-k8s-version-838815 kubelet[1523]: I1124 03:13:13.900080    1523 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 03:13:13 old-k8s-version-838815 kubelet[1523]: I1124 03:13:13.921724    1523 topology_manager.go:215] "Topology Admit Handler" podUID="afa1f94c-8c55-4847-9152-189f27ff812a" podNamespace="kube-system" podName="coredns-5dd5756b68-gfsqm"
	Nov 24 03:13:13 old-k8s-version-838815 kubelet[1523]: I1124 03:13:13.923720    1523 topology_manager.go:215] "Topology Admit Handler" podUID="1dc12010-009c-4a23-af68-7bbba3679259" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 03:13:14 old-k8s-version-838815 kubelet[1523]: I1124 03:13:14.117227    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkq7m\" (UniqueName: \"kubernetes.io/projected/afa1f94c-8c55-4847-9152-189f27ff812a-kube-api-access-lkq7m\") pod \"coredns-5dd5756b68-gfsqm\" (UID: \"afa1f94c-8c55-4847-9152-189f27ff812a\") " pod="kube-system/coredns-5dd5756b68-gfsqm"
	Nov 24 03:13:14 old-k8s-version-838815 kubelet[1523]: I1124 03:13:14.117284    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afa1f94c-8c55-4847-9152-189f27ff812a-config-volume\") pod \"coredns-5dd5756b68-gfsqm\" (UID: \"afa1f94c-8c55-4847-9152-189f27ff812a\") " pod="kube-system/coredns-5dd5756b68-gfsqm"
	Nov 24 03:13:14 old-k8s-version-838815 kubelet[1523]: I1124 03:13:14.117367    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ct7h\" (UniqueName: \"kubernetes.io/projected/1dc12010-009c-4a23-af68-7bbba3679259-kube-api-access-9ct7h\") pod \"storage-provisioner\" (UID: \"1dc12010-009c-4a23-af68-7bbba3679259\") " pod="kube-system/storage-provisioner"
	Nov 24 03:13:14 old-k8s-version-838815 kubelet[1523]: I1124 03:13:14.117505    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1dc12010-009c-4a23-af68-7bbba3679259-tmp\") pod \"storage-provisioner\" (UID: \"1dc12010-009c-4a23-af68-7bbba3679259\") " pod="kube-system/storage-provisioner"
	Nov 24 03:13:14 old-k8s-version-838815 kubelet[1523]: I1124 03:13:14.436733    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gfsqm" podStartSLOduration=14.436682129 podCreationTimestamp="2025-11-24 03:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:14.436477472 +0000 UTC m=+27.202273043" watchObservedRunningTime="2025-11-24 03:13:14.436682129 +0000 UTC m=+27.202477697"
	Nov 24 03:13:14 old-k8s-version-838815 kubelet[1523]: I1124 03:13:14.446000    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.445945188 podCreationTimestamp="2025-11-24 03:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:14.44580865 +0000 UTC m=+27.211604213" watchObservedRunningTime="2025-11-24 03:13:14.445945188 +0000 UTC m=+27.211740758"
	Nov 24 03:13:17 old-k8s-version-838815 kubelet[1523]: I1124 03:13:17.075828    1523 topology_manager.go:215] "Topology Admit Handler" podUID="f67fa448-2a4c-4ead-ad79-cb799abf6b94" podNamespace="default" podName="busybox"
	Nov 24 03:13:17 old-k8s-version-838815 kubelet[1523]: I1124 03:13:17.235059    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvdl\" (UniqueName: \"kubernetes.io/projected/f67fa448-2a4c-4ead-ad79-cb799abf6b94-kube-api-access-9mvdl\") pod \"busybox\" (UID: \"f67fa448-2a4c-4ead-ad79-cb799abf6b94\") " pod="default/busybox"
	Nov 24 03:13:20 old-k8s-version-838815 kubelet[1523]: I1124 03:13:20.454459    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.195967127 podCreationTimestamp="2025-11-24 03:13:17 +0000 UTC" firstStartedPulling="2025-11-24 03:13:17.505654488 +0000 UTC m=+30.271450053" lastFinishedPulling="2025-11-24 03:13:19.764086915 +0000 UTC m=+32.529882485" observedRunningTime="2025-11-24 03:13:20.452845806 +0000 UTC m=+33.218641377" watchObservedRunningTime="2025-11-24 03:13:20.454399559 +0000 UTC m=+33.220195129"
	
	
	==> storage-provisioner [d417c8d3e50280e381cd48b9133ff9b7eee5647f3de99e210052408619e7a770] <==
	I1124 03:13:14.401726       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:13:14.409603       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:13:14.409653       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 03:13:14.416899       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:13:14.416954       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"84f2f943-abac-4b6b-b258-36c08e0eed36", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-838815_34aca47d-4aa6-4ff0-b36d-a38a165c6a26 became leader
	I1124 03:13:14.417022       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-838815_34aca47d-4aa6-4ff0-b36d-a38a165c6a26!
	I1124 03:13:14.517504       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-838815_34aca47d-4aa6-4ff0-b36d-a38a165c6a26!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838815 -n old-k8s-version-838815
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-838815 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-838815
helpers_test.go:243: (dbg) docker inspect old-k8s-version-838815:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454",
	        "Created": "2025-11-24T03:12:31.944882861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248202,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:12:31.98793006Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454/hostname",
	        "HostsPath": "/var/lib/docker/containers/e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454/hosts",
	        "LogPath": "/var/lib/docker/containers/e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454/e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454-json.log",
	        "Name": "/old-k8s-version-838815",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-838815:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-838815",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e16c6bbd00c34e6902ddf1c35ce247e79d6cbe57413339cdb750be20c6dc7454",
	                "LowerDir": "/var/lib/docker/overlay2/7e8afc31b2aacaf13a24d2863b14f38631ffaeb5491173a1dfea41c472591119-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e8afc31b2aacaf13a24d2863b14f38631ffaeb5491173a1dfea41c472591119/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e8afc31b2aacaf13a24d2863b14f38631ffaeb5491173a1dfea41c472591119/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e8afc31b2aacaf13a24d2863b14f38631ffaeb5491173a1dfea41c472591119/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-838815",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-838815/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-838815",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-838815",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-838815",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "46af0e078d7cd48693c4e170b95a2847ff848847fb3f7442eb76e8b3ddca1a8c",
	            "SandboxKey": "/var/run/docker/netns/46af0e078d7c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-838815": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1d770a7414674547bb7a07352cd1f0600b0bb79b425bd0f5ea101ba2a99a33",
	                    "EndpointID": "fdc1dc1f92c27f97b6bdf1fc3d0aab0d23bf492eb0e6756f9ee39200b64fa8d1",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "82:72:23:e5:41:46",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-838815",
	                        "e16c6bbd00c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838815 -n old-k8s-version-838815
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-838815 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-838815 logs -n 25: (1.121788407s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-682898 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo docker system info                                                                                                                                                                                                            │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ start   │ -p NoKubernetes-502612 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ ssh     │ -p cilium-682898 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo containerd config dump                                                                                                                                                                                                        │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo crio config                                                                                                                                                                                                                   │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p cilium-682898                                                                                                                                                                                                                                    │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p old-k8s-version-838815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-838815 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:13 UTC │
	│ ssh     │ -p NoKubernetes-502612 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ stop    │ -p NoKubernetes-502612                                                                                                                                                                                                                              │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p NoKubernetes-502612 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ ssh     │ -p NoKubernetes-502612 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ delete  │ -p NoKubernetes-502612                                                                                                                                                                                                                              │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-182765      │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:13:27
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:13:27.919260  256790 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:13:27.919376  256790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:27.919385  256790 out.go:374] Setting ErrFile to fd 2...
	I1124 03:13:27.919389  256790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:27.919597  256790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:13:27.920119  256790 out.go:368] Setting JSON to false
	I1124 03:13:27.921243  256790 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3351,"bootTime":1763950657,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:13:27.921298  256790 start.go:143] virtualization: kvm guest
	I1124 03:13:27.923013  256790 out.go:179] * [no-preload-182765] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:13:27.924159  256790 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:13:27.924172  256790 notify.go:221] Checking for updates...
	I1124 03:13:27.926146  256790 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:13:27.927487  256790 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:13:27.928761  256790 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 03:13:27.933335  256790 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:13:27.934558  256790 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:13:27.936069  256790 config.go:182] Loaded profile config "cert-expiration-004045": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:13:27.936172  256790 config.go:182] Loaded profile config "kubernetes-upgrade-093930": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:13:27.936280  256790 config.go:182] Loaded profile config "old-k8s-version-838815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 03:13:27.936409  256790 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:13:27.961440  256790 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:13:27.961568  256790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:28.030834  256790 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 03:13:28.018240296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:28.030968  256790 docker.go:319] overlay module found
	I1124 03:13:28.032512  256790 out.go:179] * Using the docker driver based on user configuration
	I1124 03:13:28.033993  256790 start.go:309] selected driver: docker
	I1124 03:13:28.034010  256790 start.go:927] validating driver "docker" against <nil>
	I1124 03:13:28.034021  256790 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:13:28.034716  256790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:28.098509  256790 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 03:13:28.085617764 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:28.098765  256790 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:13:28.099085  256790 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:13:28.100734  256790 out.go:179] * Using Docker driver with root privileges
	I1124 03:13:28.101989  256790 cni.go:84] Creating CNI manager for ""
	I1124 03:13:28.102081  256790 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:13:28.102093  256790 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:13:28.102166  256790 start.go:353] cluster config:
	{Name:no-preload-182765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-182765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:28.103627  256790 out.go:179] * Starting "no-preload-182765" primary control-plane node in "no-preload-182765" cluster
	I1124 03:13:28.104737  256790 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:13:28.105900  256790 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:13:28.106893  256790 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:13:28.106969  256790 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:13:28.107049  256790 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/config.json ...
	I1124 03:13:28.107092  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/config.json: {Name:mkc14f063f0d4024ea9e3114ad1144d84391f0ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:28.107184  256790 cache.go:107] acquiring lock: {Name:mkeca260fc601fb1525b827cd530c8bf9ce6920e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:28.107234  256790 cache.go:107] acquiring lock: {Name:mk4930fa0e0560379a8a4572e4baba2301ba4df8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:28.107246  256790 cache.go:107] acquiring lock: {Name:mk1a7191061e07cc8403d08dec5700d5dbdba24f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:28.107310  256790 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 03:13:28.107290  256790 cache.go:107] acquiring lock: {Name:mkbb69f4697eb9454be7eb953ca0048bd6486e3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:28.107301  256790 cache.go:107] acquiring lock: {Name:mk8e701fcc391bc4f10941f03711dd2d7cc920c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:28.107360  256790 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 03:13:28.107378  256790 cache.go:107] acquiring lock: {Name:mk3a0f57d20d2c94ee1c224a59bcee1acb10ac9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:28.107386  256790 cache.go:107] acquiring lock: {Name:mk645abd1173a451f533e3de4486737553407d82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:28.107426  256790 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 03:13:28.107464  256790 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 03:13:28.107526  256790 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 03:13:28.107401  256790 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 03:13:28.107668  256790 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 03:13:28.107213  256790 cache.go:107] acquiring lock: {Name:mk59f35a7b0689282a4dcb9765a6bfc43af4f41f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:28.107752  256790 cache.go:115] /home/jenkins/minikube-integration/21975-4883/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 03:13:28.107763  256790 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21975-4883/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 557.36µs
	I1124 03:13:28.107790  256790 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21975-4883/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 03:13:28.109144  256790 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 03:13:28.109265  256790 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 03:13:28.109144  256790 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 03:13:28.109155  256790 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 03:13:28.109135  256790 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 03:13:28.109151  256790 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 03:13:28.109612  256790 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 03:13:28.130384  256790 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:13:28.130402  256790 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:13:28.130421  256790 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:13:28.130454  256790 start.go:360] acquireMachinesLock for no-preload-182765: {Name:mk2a4f48e358b23f343e68b2ad7294e96541f8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:28.130566  256790 start.go:364] duration metric: took 94.408µs to acquireMachinesLock for "no-preload-182765"
	I1124 03:13:28.130592  256790 start.go:93] Provisioning new machine with config: &{Name:no-preload-182765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-182765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:13:28.130683  256790 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	050a9d62b1fae       56cc512116c8f       10 seconds ago      Running             busybox                   0                   92cf5351ffb4f       busybox                                          default
	9c967be134687       ead0a4a53df89       15 seconds ago      Running             coredns                   0                   bff0a5e0d7385       coredns-5dd5756b68-gfsqm                         kube-system
	d417c8d3e5028       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   ea1c2bcd52579       storage-provisioner                              kube-system
	da6efdd3aa62d       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   eb624250bfd1b       kindnet-rvm46                                    kube-system
	5252475449db6       ea1030da44aa1       29 seconds ago      Running             kube-proxy                0                   78911a1c265aa       kube-proxy-cz68g                                 kube-system
	ba673dc701109       73deb9a3f7025       47 seconds ago      Running             etcd                      0                   ad0d441d46554       etcd-old-k8s-version-838815                      kube-system
	6d5b31c71edc4       bb5e0dde9054c       47 seconds ago      Running             kube-apiserver            0                   4c1c4c6ae28a1       kube-apiserver-old-k8s-version-838815            kube-system
	6d6e12d242d5e       f6f496300a2ae       47 seconds ago      Running             kube-scheduler            0                   6a18181f2c948       kube-scheduler-old-k8s-version-838815            kube-system
	f861f902328c3       4be79c38a4bab       47 seconds ago      Running             kube-controller-manager   0                   50cddfad68062       kube-controller-manager-old-k8s-version-838815   kube-system
	
	
	==> containerd <==
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.343520631Z" level=info msg="Container 9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.343525065Z" level=info msg="StartContainer for \"d417c8d3e50280e381cd48b9133ff9b7eee5647f3de99e210052408619e7a770\""
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.344623873Z" level=info msg="connecting to shim d417c8d3e50280e381cd48b9133ff9b7eee5647f3de99e210052408619e7a770" address="unix:///run/containerd/s/672aa13e022fadb35f4b054f5001b401a61146b514b12ffda526019465039f4c" protocol=ttrpc version=3
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.349932935Z" level=info msg="CreateContainer within sandbox \"bff0a5e0d7385183b5cd063a7ee6b2d0c23136b1879333f731dc62113d829a90\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39\""
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.350531386Z" level=info msg="StartContainer for \"9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39\""
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.351524659Z" level=info msg="connecting to shim 9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39" address="unix:///run/containerd/s/5120a4bf9e25d7330f4a14e0d710619beecdcf6f370be50e3cd3a2ea9899d637" protocol=ttrpc version=3
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.393348174Z" level=info msg="StartContainer for \"d417c8d3e50280e381cd48b9133ff9b7eee5647f3de99e210052408619e7a770\" returns successfully"
	Nov 24 03:13:14 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:14.393636097Z" level=info msg="StartContainer for \"9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39\" returns successfully"
	Nov 24 03:13:17 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:17.386486264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f67fa448-2a4c-4ead-ad79-cb799abf6b94,Namespace:default,Attempt:0,}"
	Nov 24 03:13:17 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:17.430863676Z" level=info msg="connecting to shim 92cf5351ffb4f515af1b587d1f0fee9a7329fb98ed2b7cafd752a27fe2a38ba8" address="unix:///run/containerd/s/d5e15bc8d9c15638c6cf71bda15dfc05e7182e981b25fed11ce6e2e1d044487e" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:13:17 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:17.504184159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f67fa448-2a4c-4ead-ad79-cb799abf6b94,Namespace:default,Attempt:0,} returns sandbox id \"92cf5351ffb4f515af1b587d1f0fee9a7329fb98ed2b7cafd752a27fe2a38ba8\""
	Nov 24 03:13:17 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:17.506016880Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.758901709Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.759824351Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396642"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.761067284Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.763272566Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.763722978Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.257658472s"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.763763894Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.765696940Z" level=info msg="CreateContainer within sandbox \"92cf5351ffb4f515af1b587d1f0fee9a7329fb98ed2b7cafd752a27fe2a38ba8\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.773837124Z" level=info msg="Container 050a9d62b1fae9a40cf6f3ad4abba01d04b1614b159ee822e5eda885e9338283: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.779819048Z" level=info msg="CreateContainer within sandbox \"92cf5351ffb4f515af1b587d1f0fee9a7329fb98ed2b7cafd752a27fe2a38ba8\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"050a9d62b1fae9a40cf6f3ad4abba01d04b1614b159ee822e5eda885e9338283\""
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.780433197Z" level=info msg="StartContainer for \"050a9d62b1fae9a40cf6f3ad4abba01d04b1614b159ee822e5eda885e9338283\""
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.781242290Z" level=info msg="connecting to shim 050a9d62b1fae9a40cf6f3ad4abba01d04b1614b159ee822e5eda885e9338283" address="unix:///run/containerd/s/d5e15bc8d9c15638c6cf71bda15dfc05e7182e981b25fed11ce6e2e1d044487e" protocol=ttrpc version=3
	Nov 24 03:13:19 old-k8s-version-838815 containerd[662]: time="2025-11-24T03:13:19.830895969Z" level=info msg="StartContainer for \"050a9d62b1fae9a40cf6f3ad4abba01d04b1614b159ee822e5eda885e9338283\" returns successfully"
	Nov 24 03:13:27 old-k8s-version-838815 containerd[662]: E1124 03:13:27.152955     662 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50204 - 39212 "HINFO IN 6376129420371334241.7776922599207551710. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030113409s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-838815
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-838815
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=old-k8s-version-838815
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_12_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:12:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-838815
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:13:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:13:18 +0000   Mon, 24 Nov 2025 03:12:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:13:18 +0000   Mon, 24 Nov 2025 03:12:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:13:18 +0000   Mon, 24 Nov 2025 03:12:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:13:18 +0000   Mon, 24 Nov 2025 03:13:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-838815
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                ab7830c7-5c46-485b-9e98-41065a0d51fb
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-gfsqm                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-old-k8s-version-838815                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-rvm46                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-old-k8s-version-838815             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-838815    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-cz68g                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-old-k8s-version-838815             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 43s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  43s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  43s   kubelet          Node old-k8s-version-838815 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s   kubelet          Node old-k8s-version-838815 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s   kubelet          Node old-k8s-version-838815 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node old-k8s-version-838815 event: Registered Node old-k8s-version-838815 in Controller
	  Normal  NodeReady                17s   kubelet          Node old-k8s-version-838815 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [ba673dc701109bf125ff9985c0914f2ba2109e73d86e870cceda5494df539e38] <==
	{"level":"info","ts":"2025-11-24T03:12:42.539734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-11-24T03:12:42.53991Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-24T03:12:42.541913Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T03:12:42.542035Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-24T03:12:42.54215Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-24T03:12:42.542278Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T03:12:42.542332Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T03:12:42.731051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-24T03:12:42.731099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-24T03:12:42.73113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-11-24T03:12:42.731149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-11-24T03:12:42.731157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-24T03:12:42.731169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-24T03:12:42.731179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-24T03:12:42.731992Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-838815 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T03:12:42.732033Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T03:12:42.732111Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T03:12:42.732283Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T03:12:42.732858Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T03:12:42.732376Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:12:42.733679Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:12:42.733915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T03:12:42.733946Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:12:42.733991Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:12:42.734198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 03:13:30 up 55 min,  0 user,  load average: 2.49, 2.84, 1.90
	Linux old-k8s-version-838815 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [da6efdd3aa62d69f1d169afe237a09597925d965af4ae63cb4a3d5c4fdec4a9e] <==
	I1124 03:13:03.628592       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:13:03.628864       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 03:13:03.629021       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:13:03.629037       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:13:03.629056       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:13:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:13:03.923696       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:13:03.923812       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:13:03.923827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:13:03.928287       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:13:04.323206       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:13:04.323241       1 metrics.go:72] Registering metrics
	I1124 03:13:04.323390       1 controller.go:711] "Syncing nftables rules"
	I1124 03:13:13.834505       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:13:13.834577       1 main.go:301] handling current node
	I1124 03:13:23.831969       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:13:23.832009       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6d5b31c71edc46daad185ace0e1d3f5ec67dd2787b6d503af150ed6b776dd725] <==
	I1124 03:12:44.304286       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:12:44.304446       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 03:12:44.304500       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 03:12:44.305715       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 03:12:44.311300       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1124 03:12:44.311418       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 03:12:44.312309       1 shared_informer.go:318] Caches are synced for configmaps
	I1124 03:12:44.345076       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:12:44.352770       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 03:12:45.215593       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:12:45.220057       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:12:45.220077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:12:45.621298       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:12:45.655716       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:12:45.718146       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:12:45.723374       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 03:12:45.724339       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 03:12:45.728024       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:12:46.249563       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 03:12:47.261089       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 03:12:47.272494       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:12:47.283453       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 03:12:59.912069       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:12:59.912147       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:12:59.957889       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [f861f902328c35216c5237199b026c1c5955de0259a65cb749000ef69844ea95] <==
	I1124 03:12:59.302029       1 shared_informer.go:318] Caches are synced for endpoint
	I1124 03:12:59.305619       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 03:12:59.622847       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:12:59.652268       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:12:59.652302       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 03:12:59.923016       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rvm46"
	I1124 03:12:59.925308       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cz68g"
	I1124 03:12:59.962166       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1124 03:13:00.115059       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6vrfh"
	I1124 03:13:00.122441       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gfsqm"
	I1124 03:13:00.129502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="167.92642ms"
	I1124 03:13:00.141994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.428013ms"
	I1124 03:13:00.142125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.703µs"
	I1124 03:13:00.142552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.68µs"
	I1124 03:13:00.244953       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 03:13:00.260184       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-6vrfh"
	I1124 03:13:00.266202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.285563ms"
	I1124 03:13:00.270756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="4.497899ms"
	I1124 03:13:00.270915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.256µs"
	I1124 03:13:13.921589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.633µs"
	I1124 03:13:13.933090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.668µs"
	I1124 03:13:14.064561       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1124 03:13:14.436955       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="172.739µs"
	I1124 03:13:15.447731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.036467ms"
	I1124 03:13:15.447889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.488µs"
	
	
	==> kube-proxy [5252475449db61ed023b07a2c7783bea6f77e7aad8afe357a282907f58383b49] <==
	I1124 03:13:00.553814       1 server_others.go:69] "Using iptables proxy"
	I1124 03:13:00.562245       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1124 03:13:00.583877       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:13:00.586348       1 server_others.go:152] "Using iptables Proxier"
	I1124 03:13:00.586605       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 03:13:00.586633       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 03:13:00.586674       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 03:13:00.587025       1 server.go:846] "Version info" version="v1.28.0"
	I1124 03:13:00.587044       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:13:00.588214       1 config.go:188] "Starting service config controller"
	I1124 03:13:00.588236       1 config.go:97] "Starting endpoint slice config controller"
	I1124 03:13:00.588269       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 03:13:00.588256       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 03:13:00.588337       1 config.go:315] "Starting node config controller"
	I1124 03:13:00.588346       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 03:13:00.689029       1 shared_informer.go:318] Caches are synced for service config
	I1124 03:13:00.689045       1 shared_informer.go:318] Caches are synced for node config
	I1124 03:13:00.689074       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6d6e12d242d5e9f46758e6fc6e8d424eb9bd8d2f091a9c6be9a834d07c08f917] <==
	W1124 03:12:44.275669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 03:12:44.275724       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1124 03:12:44.275599       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1124 03:12:44.275805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1124 03:12:44.275845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 03:12:44.276206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 03:12:44.275849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1124 03:12:44.276368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1124 03:12:45.097029       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1124 03:12:45.097062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1124 03:12:45.129902       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1124 03:12:45.129937       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 03:12:45.142527       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1124 03:12:45.142564       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1124 03:12:45.259310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 03:12:45.259350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1124 03:12:45.333621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 03:12:45.333668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1124 03:12:45.363066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 03:12:45.363103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 03:12:45.377571       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1124 03:12:45.377612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1124 03:12:45.447921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 03:12:45.447969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1124 03:12:47.670326       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 03:12:59 old-k8s-version-838815 kubelet[1523]: I1124 03:12:59.091469    1523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:12:59 old-k8s-version-838815 kubelet[1523]: I1124 03:12:59.936533    1523 topology_manager.go:215] "Topology Admit Handler" podUID="f375e199-56a3-44e4-97fb-08f38dc56b33" podNamespace="kube-system" podName="kindnet-rvm46"
	Nov 24 03:12:59 old-k8s-version-838815 kubelet[1523]: I1124 03:12:59.936698    1523 topology_manager.go:215] "Topology Admit Handler" podUID="d975541d-c6d9-4d84-8dc6-4ee5db7a575f" podNamespace="kube-system" podName="kube-proxy-cz68g"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111239    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d975541d-c6d9-4d84-8dc6-4ee5db7a575f-kube-proxy\") pod \"kube-proxy-cz68g\" (UID: \"d975541d-c6d9-4d84-8dc6-4ee5db7a575f\") " pod="kube-system/kube-proxy-cz68g"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111293    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d975541d-c6d9-4d84-8dc6-4ee5db7a575f-xtables-lock\") pod \"kube-proxy-cz68g\" (UID: \"d975541d-c6d9-4d84-8dc6-4ee5db7a575f\") " pod="kube-system/kube-proxy-cz68g"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111322    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f375e199-56a3-44e4-97fb-08f38dc56b33-xtables-lock\") pod \"kindnet-rvm46\" (UID: \"f375e199-56a3-44e4-97fb-08f38dc56b33\") " pod="kube-system/kindnet-rvm46"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111353    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d975541d-c6d9-4d84-8dc6-4ee5db7a575f-lib-modules\") pod \"kube-proxy-cz68g\" (UID: \"d975541d-c6d9-4d84-8dc6-4ee5db7a575f\") " pod="kube-system/kube-proxy-cz68g"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111414    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzv5\" (UniqueName: \"kubernetes.io/projected/d975541d-c6d9-4d84-8dc6-4ee5db7a575f-kube-api-access-jmzv5\") pod \"kube-proxy-cz68g\" (UID: \"d975541d-c6d9-4d84-8dc6-4ee5db7a575f\") " pod="kube-system/kube-proxy-cz68g"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111474    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f375e199-56a3-44e4-97fb-08f38dc56b33-cni-cfg\") pod \"kindnet-rvm46\" (UID: \"f375e199-56a3-44e4-97fb-08f38dc56b33\") " pod="kube-system/kindnet-rvm46"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111519    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f375e199-56a3-44e4-97fb-08f38dc56b33-lib-modules\") pod \"kindnet-rvm46\" (UID: \"f375e199-56a3-44e4-97fb-08f38dc56b33\") " pod="kube-system/kindnet-rvm46"
	Nov 24 03:13:00 old-k8s-version-838815 kubelet[1523]: I1124 03:13:00.111547    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lngr5\" (UniqueName: \"kubernetes.io/projected/f375e199-56a3-44e4-97fb-08f38dc56b33-kube-api-access-lngr5\") pod \"kindnet-rvm46\" (UID: \"f375e199-56a3-44e4-97fb-08f38dc56b33\") " pod="kube-system/kindnet-rvm46"
	Nov 24 03:13:04 old-k8s-version-838815 kubelet[1523]: I1124 03:13:04.410053    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cz68g" podStartSLOduration=5.409992605 podCreationTimestamp="2025-11-24 03:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:01.402594532 +0000 UTC m=+14.168390102" watchObservedRunningTime="2025-11-24 03:13:04.409992605 +0000 UTC m=+17.175788176"
	Nov 24 03:13:04 old-k8s-version-838815 kubelet[1523]: I1124 03:13:04.410389    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-rvm46" podStartSLOduration=2.741217364 podCreationTimestamp="2025-11-24 03:12:59 +0000 UTC" firstStartedPulling="2025-11-24 03:13:00.64654656 +0000 UTC m=+13.412342123" lastFinishedPulling="2025-11-24 03:13:03.315690251 +0000 UTC m=+16.081485812" observedRunningTime="2025-11-24 03:13:04.409957816 +0000 UTC m=+17.175753387" watchObservedRunningTime="2025-11-24 03:13:04.410361053 +0000 UTC m=+17.176156622"
	Nov 24 03:13:13 old-k8s-version-838815 kubelet[1523]: I1124 03:13:13.900080    1523 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 03:13:13 old-k8s-version-838815 kubelet[1523]: I1124 03:13:13.921724    1523 topology_manager.go:215] "Topology Admit Handler" podUID="afa1f94c-8c55-4847-9152-189f27ff812a" podNamespace="kube-system" podName="coredns-5dd5756b68-gfsqm"
	Nov 24 03:13:13 old-k8s-version-838815 kubelet[1523]: I1124 03:13:13.923720    1523 topology_manager.go:215] "Topology Admit Handler" podUID="1dc12010-009c-4a23-af68-7bbba3679259" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 03:13:14 old-k8s-version-838815 kubelet[1523]: I1124 03:13:14.117227    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkq7m\" (UniqueName: \"kubernetes.io/projected/afa1f94c-8c55-4847-9152-189f27ff812a-kube-api-access-lkq7m\") pod \"coredns-5dd5756b68-gfsqm\" (UID: \"afa1f94c-8c55-4847-9152-189f27ff812a\") " pod="kube-system/coredns-5dd5756b68-gfsqm"
	Nov 24 03:13:14 old-k8s-version-838815 kubelet[1523]: I1124 03:13:14.117284    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afa1f94c-8c55-4847-9152-189f27ff812a-config-volume\") pod \"coredns-5dd5756b68-gfsqm\" (UID: \"afa1f94c-8c55-4847-9152-189f27ff812a\") " pod="kube-system/coredns-5dd5756b68-gfsqm"
	Nov 24 03:13:14 old-k8s-version-838815 kubelet[1523]: I1124 03:13:14.117367    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ct7h\" (UniqueName: \"kubernetes.io/projected/1dc12010-009c-4a23-af68-7bbba3679259-kube-api-access-9ct7h\") pod \"storage-provisioner\" (UID: \"1dc12010-009c-4a23-af68-7bbba3679259\") " pod="kube-system/storage-provisioner"
	Nov 24 03:13:14 old-k8s-version-838815 kubelet[1523]: I1124 03:13:14.117505    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1dc12010-009c-4a23-af68-7bbba3679259-tmp\") pod \"storage-provisioner\" (UID: \"1dc12010-009c-4a23-af68-7bbba3679259\") " pod="kube-system/storage-provisioner"
	Nov 24 03:13:14 old-k8s-version-838815 kubelet[1523]: I1124 03:13:14.436733    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gfsqm" podStartSLOduration=14.436682129 podCreationTimestamp="2025-11-24 03:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:14.436477472 +0000 UTC m=+27.202273043" watchObservedRunningTime="2025-11-24 03:13:14.436682129 +0000 UTC m=+27.202477697"
	Nov 24 03:13:14 old-k8s-version-838815 kubelet[1523]: I1124 03:13:14.446000    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.445945188 podCreationTimestamp="2025-11-24 03:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:14.44580865 +0000 UTC m=+27.211604213" watchObservedRunningTime="2025-11-24 03:13:14.445945188 +0000 UTC m=+27.211740758"
	Nov 24 03:13:17 old-k8s-version-838815 kubelet[1523]: I1124 03:13:17.075828    1523 topology_manager.go:215] "Topology Admit Handler" podUID="f67fa448-2a4c-4ead-ad79-cb799abf6b94" podNamespace="default" podName="busybox"
	Nov 24 03:13:17 old-k8s-version-838815 kubelet[1523]: I1124 03:13:17.235059    1523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvdl\" (UniqueName: \"kubernetes.io/projected/f67fa448-2a4c-4ead-ad79-cb799abf6b94-kube-api-access-9mvdl\") pod \"busybox\" (UID: \"f67fa448-2a4c-4ead-ad79-cb799abf6b94\") " pod="default/busybox"
	Nov 24 03:13:20 old-k8s-version-838815 kubelet[1523]: I1124 03:13:20.454459    1523 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.195967127 podCreationTimestamp="2025-11-24 03:13:17 +0000 UTC" firstStartedPulling="2025-11-24 03:13:17.505654488 +0000 UTC m=+30.271450053" lastFinishedPulling="2025-11-24 03:13:19.764086915 +0000 UTC m=+32.529882485" observedRunningTime="2025-11-24 03:13:20.452845806 +0000 UTC m=+33.218641377" watchObservedRunningTime="2025-11-24 03:13:20.454399559 +0000 UTC m=+33.220195129"
	
	
	==> storage-provisioner [d417c8d3e50280e381cd48b9133ff9b7eee5647f3de99e210052408619e7a770] <==
	I1124 03:13:14.401726       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:13:14.409603       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:13:14.409653       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 03:13:14.416899       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:13:14.416954       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"84f2f943-abac-4b6b-b258-36c08e0eed36", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-838815_34aca47d-4aa6-4ff0-b36d-a38a165c6a26 became leader
	I1124 03:13:14.417022       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-838815_34aca47d-4aa6-4ff0-b36d-a38a165c6a26!
	I1124 03:13:14.517504       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-838815_34aca47d-4aa6-4ff0-b36d-a38a165c6a26!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838815 -n old-k8s-version-838815
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-838815 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (14.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (13.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-182765 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cf658218-2786-43b2-a609-0e21c6244867] Pending
helpers_test.go:352: "busybox" [cf658218-2786-43b2-a609-0e21c6244867] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cf658218-2786-43b2-a609-0e21c6244867] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00391059s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-182765 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
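The assertion above comes from the test exec'ing into the busybox pod and comparing the open-file soft limit against the value minikube is expected to configure. A minimal, self-contained sketch of that same check (not the test harness itself; it assumes kubectl is on PATH and that the no-preload-182765 context from this run still exists):

	// ulimitcheck.go - hypothetical standalone reproduction of the check at
	// start_stop_delete_test.go:194; names and the expected value are taken from this report.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the test runs: read the nofile soft limit inside the pod.
		out, err := exec.Command("kubectl", "--context", "no-preload-182765",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
		if err != nil {
			fmt.Println("kubectl exec failed:", err, string(out))
			return
		}
		got := strings.TrimSpace(string(out))
		if got != "1048576" {
			// This is the condition that produced the failure logged above.
			fmt.Printf("'ulimit -n' returned %s, expected 1048576\n", got)
		}
	}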
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-182765
helpers_test.go:243: (dbg) docker inspect no-preload-182765:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880",
	        "Created": "2025-11-24T03:13:28.878660504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 257533,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:13:28.922498494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880/hosts",
	        "LogPath": "/var/lib/docker/containers/7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880/7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880-json.log",
	        "Name": "/no-preload-182765",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-182765:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-182765",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880",
	                "LowerDir": "/var/lib/docker/overlay2/5b3cd16322ccef02ae6a882d84c589ac763afc9604c420b3747093b3ecd2eddd-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b3cd16322ccef02ae6a882d84c589ac763afc9604c420b3747093b3ecd2eddd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b3cd16322ccef02ae6a882d84c589ac763afc9604c420b3747093b3ecd2eddd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b3cd16322ccef02ae6a882d84c589ac763afc9604c420b3747093b3ecd2eddd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-182765",
	                "Source": "/var/lib/docker/volumes/no-preload-182765/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-182765",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-182765",
	                "name.minikube.sigs.k8s.io": "no-preload-182765",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ab2b2ed6b1f842385b05cc9590337eeded4e73a971730d3cf9b9594009bfef09",
	            "SandboxKey": "/var/run/docker/netns/ab2b2ed6b1f8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-182765": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4e3f4179ae31456aea033a2bb15d23923301eed3e80090edbf7ca8514d0dcff5",
	                    "EndpointID": "4ac9c9278eab5dbe766429379a42e25b92f182e29063d12f5875257cb9ba99cc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ae:69:18:c9:42:43",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-182765",
	                        "7a0eb0a9c43e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
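When reading a dump like the one above, it can be easier to pull out a single field than to scan the whole document. A minimal sketch (not part of the post-mortem tooling; it assumes the docker CLI is available and the container from this run still exists) that extracts HostConfig.Ulimits, the setting most relevant to this DeployApp failure:

	// inspectulimits.go - hypothetical helper for reading one field from `docker inspect`.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-182765").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		// docker inspect returns a JSON array; decode only the field we care about.
		var containers []struct {
			HostConfig struct {
				Ulimits []map[string]any `json:"Ulimits"`
			} `json:"HostConfig"`
		}
		if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
			fmt.Println("unexpected inspect output:", err)
			return
		}
		// An empty list (as in the dump above) means no per-container nofile
		// override was set on the kic container itself.
		fmt.Printf("Ulimits: %v\n", containers[0].HostConfig.Ulimits)
	}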
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-182765 -n no-preload-182765
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-182765 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-182765 logs -n 25: (1.079907066s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-682898 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo containerd config dump                                                                                                                                                                                                        │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo crio config                                                                                                                                                                                                                   │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p cilium-682898                                                                                                                                                                                                                                    │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p old-k8s-version-838815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-838815 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:13 UTC │
	│ ssh     │ -p NoKubernetes-502612 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ stop    │ -p NoKubernetes-502612                                                                                                                                                                                                                              │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p NoKubernetes-502612 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ ssh     │ -p NoKubernetes-502612 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ delete  │ -p NoKubernetes-502612                                                                                                                                                                                                                              │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-182765      │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-838815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-838815 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ stop    │ -p old-k8s-version-838815 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-838815 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-838815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-838815 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p old-k8s-version-838815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-838815 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:13:45
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:13:45.063573  261872 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:13:45.063693  261872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:45.063705  261872 out.go:374] Setting ErrFile to fd 2...
	I1124 03:13:45.063709  261872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:45.063942  261872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:13:45.064411  261872 out.go:368] Setting JSON to false
	I1124 03:13:45.065542  261872 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3368,"bootTime":1763950657,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:13:45.065595  261872 start.go:143] virtualization: kvm guest
	I1124 03:13:45.067548  261872 out.go:179] * [old-k8s-version-838815] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:13:45.068712  261872 notify.go:221] Checking for updates...
	I1124 03:13:45.068742  261872 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:13:45.070032  261872 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:13:45.071265  261872 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:13:45.072490  261872 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 03:13:45.073808  261872 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:13:45.075093  261872 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:13:45.076623  261872 config.go:182] Loaded profile config "old-k8s-version-838815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 03:13:45.078393  261872 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1124 03:13:45.079510  261872 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:13:45.104663  261872 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:13:45.104768  261872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:45.164545  261872 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-24 03:13:45.15467142 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:45.164668  261872 docker.go:319] overlay module found
	I1124 03:13:45.167182  261872 out.go:179] * Using the docker driver based on existing profile
	I1124 03:13:45.168219  261872 start.go:309] selected driver: docker
	I1124 03:13:45.168233  261872 start.go:927] validating driver "docker" against &{Name:old-k8s-version-838815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-838815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:45.168316  261872 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:13:45.168853  261872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:45.229002  261872 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-24 03:13:45.218604033 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:45.229294  261872 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:13:45.229329  261872 cni.go:84] Creating CNI manager for ""
	I1124 03:13:45.229391  261872 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:13:45.229434  261872 start.go:353] cluster config:
	{Name:old-k8s-version-838815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-838815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:45.231198  261872 out.go:179] * Starting "old-k8s-version-838815" primary control-plane node in "old-k8s-version-838815" cluster
	I1124 03:13:45.232502  261872 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:13:45.233810  261872 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:13:45.234991  261872 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 03:13:45.235026  261872 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1124 03:13:45.235032  261872 cache.go:65] Caching tarball of preloaded images
	I1124 03:13:45.235070  261872 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:13:45.235114  261872 preload.go:238] Found /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 03:13:45.235126  261872 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1124 03:13:45.235252  261872 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/config.json ...
	I1124 03:13:45.255571  261872 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:13:45.255589  261872 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:13:45.255605  261872 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:13:45.255644  261872 start.go:360] acquireMachinesLock for old-k8s-version-838815: {Name:mk8b693c5097c108d6caf8578d5d3410ead3ca46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:45.255709  261872 start.go:364] duration metric: took 42.605µs to acquireMachinesLock for "old-k8s-version-838815"
	I1124 03:13:45.255731  261872 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:13:45.255740  261872 fix.go:54] fixHost starting: 
	I1124 03:13:45.255971  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:45.273238  261872 fix.go:112] recreateIfNeeded on old-k8s-version-838815: state=Stopped err=<nil>
	W1124 03:13:45.273266  261872 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:13:42.782440  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:42.782910  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:42.782966  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:42.783023  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:42.815947  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:42.815972  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:42.815978  222154 cri.go:89] found id: ""
	I1124 03:13:42.815988  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:42.816048  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:42.821068  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:42.825377  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:42.825439  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:42.857111  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:42.857130  222154 cri.go:89] found id: ""
	I1124 03:13:42.857140  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:42.857196  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:42.862037  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:42.862106  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:42.894686  222154 cri.go:89] found id: ""
	I1124 03:13:42.894714  222154 logs.go:282] 0 containers: []
	W1124 03:13:42.894724  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:42.894731  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:42.894817  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:42.926397  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:42.926419  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:42.926424  222154 cri.go:89] found id: ""
	I1124 03:13:42.926434  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:42.926490  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:42.931201  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:42.935486  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:42.935550  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:42.968690  222154 cri.go:89] found id: ""
	I1124 03:13:42.968725  222154 logs.go:282] 0 containers: []
	W1124 03:13:42.968736  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:42.968744  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:42.968831  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:43.001388  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:43.001409  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:43.001416  222154 cri.go:89] found id: ""
	I1124 03:13:43.001424  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:43.001476  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:43.005816  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:43.010343  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:43.010405  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:43.041179  222154 cri.go:89] found id: ""
	I1124 03:13:43.041206  222154 logs.go:282] 0 containers: []
	W1124 03:13:43.041234  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:43.041243  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:43.041300  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:43.071844  222154 cri.go:89] found id: ""
	I1124 03:13:43.071871  222154 logs.go:282] 0 containers: []
	W1124 03:13:43.071882  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:43.071894  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:43.071907  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:43.182610  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:43.182650  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:43.256109  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
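The refused connection reported by kubectl here lines up with the healthz probe failures against https://192.168.76.2:8443 earlier in this pass (api_server.go:269): the apiserver in this profile is not accepting connections yet, so every log-gathering step that needs it fails the same way. The probe can be replayed by hand (a minimal sketch, assuming the host can reach the 192.168.76.x cluster network and that anonymous access to /healthz is allowed, which is the Kubernetes default; -k skips verification of the minikubeCA-signed serving cert):

    curl -kSs https://192.168.76.2:8443/healthz; echo
    # a healthy apiserver answers "ok"; while it is down this fails with "Connection refused" (curl exit code 7)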
	I1124 03:13:43.256129  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:13:43.256143  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:43.295130  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:43.295166  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:43.336837  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:13:43.336877  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:43.375760  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:13:43.375812  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:43.409160  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:13:43.409182  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:43.454125  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:43.454159  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:43.472184  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:13:43.472214  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:43.536093  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:13:43.536127  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:43.578848  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:13:43.578883  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:13:43.634581  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:13:43.634620  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:13:44.399555  256790 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.136724738s)
	I1124 03:13:44.399580  256790 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-4883/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 03:13:44.399600  256790 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:13:44.399642  256790 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:13:44.821198  256790 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-4883/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 03:13:44.821237  256790 cache_images.go:125] Successfully loaded all cached images
	I1124 03:13:44.821243  256790 cache_images.go:94] duration metric: took 10.161497332s to LoadCachedImages
	I1124 03:13:44.821257  256790 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1124 03:13:44.821363  256790 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-182765 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-182765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
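The [Unit]/[Service] fragment in the message above is the kubelet drop-in that this run writes out a few lines later as the 321-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. If the merged unit ever needs checking on the node, systemd can print it together with its drop-ins (a sketch, assuming an SSH session into the minikube container):

    sudo systemctl cat kubelet               # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p DropInPaths    # the drop-in files systemd actually loaded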
	I1124 03:13:44.821420  256790 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:13:44.851903  256790 cni.go:84] Creating CNI manager for ""
	I1124 03:13:44.851920  256790 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:13:44.851931  256790 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:13:44.851952  256790 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-182765 NodeName:no-preload-182765 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:13:44.852066  256790 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-182765"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
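The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) form the file that is copied to /var/tmp/minikube/kubeadm.yaml.new further down. When a generated config like this needs checking outside a test run, recent kubeadm releases can validate it offline (a sketch, assuming kubeadm v1.34.1 is on PATH and the YAML was saved locally as kubeadm.yaml):

    kubeadm config validate --config kubeadm.yaml   # exits non-zero and names the offending field on error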
	I1124 03:13:44.852118  256790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:13:44.861657  256790 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 03:13:44.861719  256790 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 03:13:44.869963  256790 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1124 03:13:44.870050  256790 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 03:13:44.870081  256790 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21975-4883/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1124 03:13:44.870164  256790 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21975-4883/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1124 03:13:44.873929  256790 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 03:13:44.873957  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1124 03:13:45.755081  256790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:45.769749  256790 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 03:13:45.773873  256790 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 03:13:45.773907  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1124 03:13:45.870033  256790 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 03:13:45.876220  256790 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 03:13:45.876253  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
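Each of the three binaries above is fetched through a checksum= fragment that points at the matching .sha256 file on dl.k8s.io, so the download is verified before it lands in the cache. The equivalent manual check for one of them (a sketch, run from any scratch directory):

    curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm"
    curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256"
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check   # expect: kubeadm: OK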
	I1124 03:13:46.121148  256790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:13:46.128974  256790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1124 03:13:46.142040  256790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:13:46.302008  256790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1124 03:13:46.315155  256790 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:13:46.319270  256790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:13:46.365839  256790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:46.454310  256790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:13:46.478140  256790 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765 for IP: 192.168.85.2
	I1124 03:13:46.478161  256790 certs.go:195] generating shared ca certs ...
	I1124 03:13:46.478180  256790 certs.go:227] acquiring lock for ca certs: {Name:mkd28e9f2e8e31fe23d0ba27851eb0df56d94420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.478333  256790 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key
	I1124 03:13:46.478398  256790 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key
	I1124 03:13:46.478412  256790 certs.go:257] generating profile certs ...
	I1124 03:13:46.478485  256790 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.key
	I1124 03:13:46.478501  256790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt with IP's: []
	I1124 03:13:46.646111  256790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt ...
	I1124 03:13:46.646143  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt: {Name:mk73539b3f54c1961564b6a79fff2497576cb92b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.646339  256790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.key ...
	I1124 03:13:46.646352  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.key: {Name:mk58ceb1530d77d90debb469585bea533f41da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.646449  256790 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.key.cdf44a03
	I1124 03:13:46.646469  256790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.crt.cdf44a03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 03:13:46.816691  256790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.crt.cdf44a03 ...
	I1124 03:13:46.816717  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.crt.cdf44a03: {Name:mk27d6c8cc3794b3c9d0a9b94e935219741af6b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.816901  256790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.key.cdf44a03 ...
	I1124 03:13:46.816918  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.key.cdf44a03: {Name:mk1d546ce94c496d8da0bcf0c05eba41706e1518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.817006  256790 certs.go:382] copying /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.crt.cdf44a03 -> /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.crt
	I1124 03:13:46.817097  256790 certs.go:386] copying /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.key.cdf44a03 -> /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.key
	I1124 03:13:46.817157  256790 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.key
	I1124 03:13:46.817178  256790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.crt with IP's: []
	I1124 03:13:46.857388  256790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.crt ...
	I1124 03:13:46.857425  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.crt: {Name:mk44f1fd8866b0e73a0df7a8d224ae9f9cfeb9bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.857606  256790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.key ...
	I1124 03:13:46.857626  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.key: {Name:mk6b674d94e7b9f3efc4ba5a0be39c3c8820e891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.857894  256790 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem (1338 bytes)
	W1124 03:13:46.857957  256790 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429_empty.pem, impossibly tiny 0 bytes
	I1124 03:13:46.857968  256790 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:13:46.858004  256790 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:13:46.858036  256790 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:13:46.858072  256790 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem (1679 bytes)
	I1124 03:13:46.858143  256790 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:13:46.858963  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:13:46.878497  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:13:46.896895  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:13:46.914015  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:13:46.932134  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:13:46.949903  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:13:46.966726  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:13:46.985102  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:13:47.002201  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem --> /usr/share/ca-certificates/8429.pem (1338 bytes)
	I1124 03:13:47.023536  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /usr/share/ca-certificates/84292.pem (1708 bytes)
	I1124 03:13:47.040674  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:13:47.057844  256790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:13:47.069955  256790 ssh_runner.go:195] Run: openssl version
	I1124 03:13:47.076042  256790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8429.pem && ln -fs /usr/share/ca-certificates/8429.pem /etc/ssl/certs/8429.pem"
	I1124 03:13:47.084314  256790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8429.pem
	I1124 03:13:47.088076  256790 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/8429.pem
	I1124 03:13:47.088133  256790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8429.pem
	I1124 03:13:47.122851  256790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8429.pem /etc/ssl/certs/51391683.0"
	I1124 03:13:47.131586  256790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84292.pem && ln -fs /usr/share/ca-certificates/84292.pem /etc/ssl/certs/84292.pem"
	I1124 03:13:47.140008  256790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84292.pem
	I1124 03:13:47.143689  256790 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/84292.pem
	I1124 03:13:47.143757  256790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84292.pem
	I1124 03:13:47.178455  256790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84292.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:13:47.187236  256790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:13:47.195760  256790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:47.199811  256790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:47.199865  256790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:47.238221  256790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
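The link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the values printed by the preceding openssl x509 -hash runs; OpenSSL resolves CAs in /etc/ssl/certs by that hash-named symlink. The whole add-a-CA step collapses to a few lines (a sketch for the minikubeCA case, using the same paths as the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # b5213941 here, judging by the ln target above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # hash-named link OpenSSL uses for CA lookup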
	I1124 03:13:47.247116  256790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:13:47.250904  256790 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:13:47.250966  256790 kubeadm.go:401] StartCluster: {Name:no-preload-182765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-182765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:47.251063  256790 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:13:47.251115  256790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:13:47.277332  256790 cri.go:89] found id: ""
	I1124 03:13:47.277405  256790 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:13:47.285583  256790 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:13:47.293758  256790 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:13:47.293850  256790 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:13:47.302034  256790 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:13:47.302058  256790 kubeadm.go:158] found existing configuration files:
	
	I1124 03:13:47.302121  256790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:13:47.310408  256790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:13:47.310462  256790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:13:47.318990  256790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:13:47.327190  256790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:13:47.327239  256790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:13:47.334835  256790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:13:47.342219  256790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:13:47.342274  256790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:13:47.349204  256790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:13:47.356687  256790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:13:47.356732  256790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:13:47.363898  256790 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:13:47.398897  256790 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:13:47.398952  256790 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:13:47.418532  256790 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:13:47.418630  256790 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:13:47.418675  256790 kubeadm.go:319] OS: Linux
	I1124 03:13:47.418732  256790 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:13:47.418805  256790 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:13:47.418868  256790 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:13:47.418933  256790 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:13:47.419002  256790 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:13:47.419073  256790 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:13:47.419156  256790 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:13:47.419255  256790 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:13:47.477815  256790 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:13:47.477986  256790 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:13:47.478155  256790 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:13:47.482588  256790 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:13:47.485410  256790 out.go:252]   - Generating certificates and keys ...
	I1124 03:13:47.485509  256790 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:13:47.485602  256790 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:13:47.512216  256790 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:13:47.791516  256790 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:13:45.275046  261872 out.go:252] * Restarting existing docker container for "old-k8s-version-838815" ...
	I1124 03:13:45.275119  261872 cli_runner.go:164] Run: docker start old-k8s-version-838815
	I1124 03:13:45.620024  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:45.640739  261872 kic.go:430] container "old-k8s-version-838815" state is running.
	I1124 03:13:45.641204  261872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-838815
	I1124 03:13:45.662968  261872 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/config.json ...
	I1124 03:13:45.663231  261872 machine.go:94] provisionDockerMachine start ...
	I1124 03:13:45.663313  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:45.683416  261872 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:45.683656  261872 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33072 <nil> <nil>}
	I1124 03:13:45.683670  261872 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:13:45.684408  261872 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44748->127.0.0.1:33072: read: connection reset by peer
	I1124 03:13:48.827423  261872 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-838815
	
	I1124 03:13:48.827455  261872 ubuntu.go:182] provisioning hostname "old-k8s-version-838815"
	I1124 03:13:48.827530  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:48.849134  261872 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:48.849485  261872 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33072 <nil> <nil>}
	I1124 03:13:48.849506  261872 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-838815 && echo "old-k8s-version-838815" | sudo tee /etc/hostname
	I1124 03:13:48.998034  261872 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-838815
	
	I1124 03:13:48.998122  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:49.017196  261872 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:49.017467  261872 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33072 <nil> <nil>}
	I1124 03:13:49.017485  261872 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-838815' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-838815/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-838815' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:13:49.157389  261872 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:13:49.157422  261872 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-4883/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-4883/.minikube}
	I1124 03:13:49.157439  261872 ubuntu.go:190] setting up certificates
	I1124 03:13:49.157459  261872 provision.go:84] configureAuth start
	I1124 03:13:49.157517  261872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-838815
	I1124 03:13:49.175307  261872 provision.go:143] copyHostCerts
	I1124 03:13:49.175364  261872 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem, removing ...
	I1124 03:13:49.175381  261872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem
	I1124 03:13:49.175448  261872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem (1078 bytes)
	I1124 03:13:49.175546  261872 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem, removing ...
	I1124 03:13:49.175564  261872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem
	I1124 03:13:49.175593  261872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem (1123 bytes)
	I1124 03:13:49.175660  261872 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem, removing ...
	I1124 03:13:49.175668  261872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem
	I1124 03:13:49.175690  261872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem (1679 bytes)
	I1124 03:13:49.175751  261872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-838815 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-838815]
	I1124 03:13:49.251404  261872 provision.go:177] copyRemoteCerts
	I1124 03:13:49.251471  261872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:13:49.251502  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:49.270581  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:49.370991  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:13:49.388399  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 03:13:49.405594  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:13:49.422948  261872 provision.go:87] duration metric: took 265.476289ms to configureAuth
	I1124 03:13:49.422978  261872 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:13:49.423159  261872 config.go:182] Loaded profile config "old-k8s-version-838815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 03:13:49.423177  261872 machine.go:97] duration metric: took 3.759931545s to provisionDockerMachine
	I1124 03:13:49.423186  261872 start.go:293] postStartSetup for "old-k8s-version-838815" (driver="docker")
	I1124 03:13:49.423205  261872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:13:49.423257  261872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:13:49.423288  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:49.442031  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:49.549181  261872 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:13:49.552968  261872 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:13:49.553001  261872 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:13:49.553014  261872 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/addons for local assets ...
	I1124 03:13:49.553084  261872 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/files for local assets ...
	I1124 03:13:49.553182  261872 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem -> 84292.pem in /etc/ssl/certs
	I1124 03:13:49.553299  261872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:13:49.562836  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:13:49.582845  261872 start.go:296] duration metric: took 159.631045ms for postStartSetup
	I1124 03:13:49.582937  261872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:13:49.582984  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:49.603746  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:49.704223  261872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:13:49.709257  261872 fix.go:56] duration metric: took 4.453511671s for fixHost
	I1124 03:13:49.709278  261872 start.go:83] releasing machines lock for "old-k8s-version-838815", held for 4.453557618s
	I1124 03:13:49.709339  261872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-838815
	I1124 03:13:49.729262  261872 ssh_runner.go:195] Run: cat /version.json
	I1124 03:13:49.729344  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:49.729357  261872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:13:49.729455  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:49.750433  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:49.750853  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:49.904900  261872 ssh_runner.go:195] Run: systemctl --version
	I1124 03:13:49.912654  261872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:13:49.917879  261872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:13:49.917995  261872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:13:49.926022  261872 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:13:49.926045  261872 start.go:496] detecting cgroup driver to use...
	I1124 03:13:49.926077  261872 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:13:49.926117  261872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:13:49.945996  261872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:13:49.961277  261872 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:13:49.961353  261872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:13:49.978494  261872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:13:49.993905  261872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:13:50.082335  261872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:13:50.180070  261872 docker.go:234] disabling docker service ...
	I1124 03:13:50.180148  261872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:13:50.197022  261872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:13:50.210855  261872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:13:50.296272  261872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:13:50.382358  261872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:13:50.395675  261872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:13:50.409575  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 03:13:50.418375  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:13:50.427099  261872 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 03:13:50.427158  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 03:13:50.435870  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:13:50.444942  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:13:50.453675  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:13:50.462003  261872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:13:50.469918  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:13:50.478445  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:13:50.486727  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:13:50.495415  261872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:13:50.502528  261872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:13:50.509670  261872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:50.589308  261872 ssh_runner.go:195] Run: sudo systemctl restart containerd
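The sed edits above switch containerd to the systemd cgroup driver (SystemdCgroup = true), force the runc v2 shim, and re-point the CNI conf_dir before the restart. Whether the flip took effect can be confirmed directly on the node (a sketch, using the same file and service names as the log):

    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = true
    sudo systemctl is-active containerd                        # expect: active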
	I1124 03:13:50.703385  261872 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:13:50.703462  261872 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:13:50.707755  261872 start.go:564] Will wait 60s for crictl version
	I1124 03:13:50.707827  261872 ssh_runner.go:195] Run: which crictl
	I1124 03:13:50.711579  261872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:13:50.738380  261872 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:13:50.738444  261872 ssh_runner.go:195] Run: containerd --version
	I1124 03:13:50.759417  261872 ssh_runner.go:195] Run: containerd --version
	I1124 03:13:50.783375  261872 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1124 03:13:46.174035  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:46.174513  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:46.174580  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:46.174641  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:46.202025  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:46.202045  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:46.202049  222154 cri.go:89] found id: ""
	I1124 03:13:46.202056  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:46.202106  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.206050  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.209735  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:46.209834  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:46.238096  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:46.238119  222154 cri.go:89] found id: ""
	I1124 03:13:46.238128  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:46.238199  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.242110  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:46.242175  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:46.267229  222154 cri.go:89] found id: ""
	I1124 03:13:46.267263  222154 logs.go:282] 0 containers: []
	W1124 03:13:46.267270  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:46.267276  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:46.267319  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:46.293274  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:46.293297  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:46.293304  222154 cri.go:89] found id: ""
	I1124 03:13:46.293316  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:46.293374  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.297360  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.301203  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:46.301264  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:46.330284  222154 cri.go:89] found id: ""
	I1124 03:13:46.330304  222154 logs.go:282] 0 containers: []
	W1124 03:13:46.330311  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:46.330320  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:46.330364  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:46.357550  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:46.357569  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:46.357572  222154 cri.go:89] found id: ""
	I1124 03:13:46.357579  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:46.357631  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.361542  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.365711  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:46.365789  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:46.391493  222154 cri.go:89] found id: ""
	I1124 03:13:46.391520  222154 logs.go:282] 0 containers: []
	W1124 03:13:46.391531  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:46.391538  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:46.391600  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:46.422363  222154 cri.go:89] found id: ""
	I1124 03:13:46.422390  222154 logs.go:282] 0 containers: []
	W1124 03:13:46.422398  222154 logs.go:284] No container was found matching "storage-provisioner"
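The block above is minikube enumerating control-plane containers: for each component it runs "sudo crictl ps -a --quiet --name=<component>" and records the returned IDs, or logs a warning when nothing matches. A minimal stand-alone sketch of that pattern, assuming only that crictl is installed on the node and can reach the containerd socket (not part of the log itself):

	# Sketch: list containers (running or exited) for a few control-plane components,
	# mirroring the "sudo crictl ps -a --quiet --name=<component>" calls above.
	for name in kube-apiserver etcd kube-scheduler kube-controller-manager coredns kube-proxy; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container found matching $name"
	  else
	    printf '%s: %s\n' "$name" "$ids"
	  fi
	done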
	I1124 03:13:46.422408  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:13:46.422418  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:13:46.466844  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:46.466872  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:46.565012  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:13:46.565044  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:46.600024  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:13:46.600051  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:46.664261  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:13:46.664292  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:46.695951  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:13:46.695980  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:46.724289  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:13:46.724318  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:46.761731  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:13:46.761760  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:13:46.794474  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:46.794500  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:46.808440  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:46.808473  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:46.866985  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:46.867013  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:13:46.867028  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:46.898817  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:46.898847  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
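The "Gathering logs for ..." steps above combine journalctl (kubelet and containerd units), crictl logs --tail for individual containers, dmesg, and a kubectl describe that fails while the apiserver is unreachable. Collecting the same data by hand looks roughly like this; the container ID is a placeholder for one of the IDs printed by "crictl ps -a" (sketch, not part of the log):

	# Sketch: gather the same logs minikube collects above.
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo journalctl -u containerd -n 400 --no-pager
	sudo crictl logs --tail 400 <container-id>        # substitute an ID from "crictl ps -a"
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400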
	I1124 03:13:49.433841  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:49.434243  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:49.434305  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:49.434360  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:49.465869  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:49.465892  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:49.465898  222154 cri.go:89] found id: ""
	I1124 03:13:49.465906  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:49.465956  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.470302  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.474402  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:49.474458  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:49.499866  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:49.499886  222154 cri.go:89] found id: ""
	I1124 03:13:49.499895  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:49.499944  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.503761  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:49.503857  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:49.529482  222154 cri.go:89] found id: ""
	I1124 03:13:49.529509  222154 logs.go:282] 0 containers: []
	W1124 03:13:49.529517  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:49.529523  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:49.529575  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:49.560512  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:49.560535  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:49.560541  222154 cri.go:89] found id: ""
	I1124 03:13:49.560550  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:49.560606  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.565155  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.568964  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:49.569024  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:49.602037  222154 cri.go:89] found id: ""
	I1124 03:13:49.602064  222154 logs.go:282] 0 containers: []
	W1124 03:13:49.602076  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:49.602083  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:49.602136  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:49.630842  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:49.630865  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:49.630871  222154 cri.go:89] found id: ""
	I1124 03:13:49.630880  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:49.630931  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.635044  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.638687  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:49.638741  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:49.665239  222154 cri.go:89] found id: ""
	I1124 03:13:49.665261  222154 logs.go:282] 0 containers: []
	W1124 03:13:49.665269  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:49.665274  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:49.665326  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:49.692995  222154 cri.go:89] found id: ""
	I1124 03:13:49.693017  222154 logs.go:282] 0 containers: []
	W1124 03:13:49.693025  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:49.693035  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:13:49.693045  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:49.727824  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:13:49.727851  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:49.800927  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:13:49.800963  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:49.836038  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:13:49.836063  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:13:49.882066  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:13:49.882105  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:13:49.914295  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:49.914324  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:49.928107  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:49.928142  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:49.961137  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:13:49.961167  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:49.997955  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:13:49.997982  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:50.030344  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:50.030389  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:50.126887  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:50.126920  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:50.194493  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:50.194516  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:13:50.194531  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:48.089976  256790 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:13:48.671180  256790 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:13:48.847653  256790 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:13:48.847833  256790 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-182765] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:13:49.113359  256790 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:13:49.113541  256790 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-182765] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:13:49.259626  256790 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:13:49.550081  256790 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:13:49.833155  256790 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:13:49.833287  256790 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:13:50.068112  256790 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:13:50.349879  256790 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:13:50.396376  256790 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:13:50.845181  256790 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:13:51.371552  256790 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:13:51.372143  256790 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:13:51.375886  256790 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:13:50.784864  261872 cli_runner.go:164] Run: docker network inspect old-k8s-version-838815 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:13:50.803745  261872 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:13:50.808156  261872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
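The two commands above first check whether host.minikube.internal is already pinned in /etc/hosts and, if needed, rewrite the file so exactly one entry points at the gateway IP. The same idempotent pattern as a stand-alone sketch; the IP and hostname are taken from the log, the temp-file path is arbitrary (not part of the log itself):

	# Sketch: ensure a single /etc/hosts entry for a name, as minikube does above.
	ip="192.168.94.1"; name="host.minikube.internal"
	if ! grep -q "${ip}"$'\t'"${name}"'$' /etc/hosts; then
	  { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts
	fi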
	I1124 03:13:50.818521  261872 kubeadm.go:884] updating cluster {Name:old-k8s-version-838815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-838815 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:13:50.818658  261872 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 03:13:50.818721  261872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:13:50.844347  261872 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:13:50.844369  261872 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:13:50.844426  261872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:13:50.870116  261872 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:13:50.870140  261872 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:13:50.870147  261872 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 containerd true true} ...
	I1124 03:13:50.870271  261872 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-838815 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-838815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:13:50.870331  261872 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:13:50.895009  261872 cni.go:84] Creating CNI manager for ""
	I1124 03:13:50.895030  261872 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:13:50.895042  261872 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:13:50.895061  261872 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-838815 NodeName:old-k8s-version-838815 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:13:50.895166  261872 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-838815"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
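The YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what minikube writes to /var/tmp/minikube/kubeadm.yaml.new in the scp step a few lines below. To sanity-check such a file by hand, kubeadm can parse it without changing anything; this sketch assumes the bundled kubeadm supports --dry-run, which recent versions do (not part of the log itself):

	# Sketch: parse and validate the rendered config without touching node state.
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run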
	I1124 03:13:50.895220  261872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 03:13:50.903262  261872 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:13:50.903330  261872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:13:50.910939  261872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1124 03:13:50.923552  261872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:13:50.936014  261872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1124 03:13:50.948693  261872 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:13:50.952277  261872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:13:50.961934  261872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:51.042374  261872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:13:51.074826  261872 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815 for IP: 192.168.94.2
	I1124 03:13:51.074846  261872 certs.go:195] generating shared ca certs ...
	I1124 03:13:51.074865  261872 certs.go:227] acquiring lock for ca certs: {Name:mkd28e9f2e8e31fe23d0ba27851eb0df56d94420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:51.075047  261872 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key
	I1124 03:13:51.075114  261872 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key
	I1124 03:13:51.075126  261872 certs.go:257] generating profile certs ...
	I1124 03:13:51.075227  261872 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.key
	I1124 03:13:51.075311  261872 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/apiserver.key.1d226222
	I1124 03:13:51.075433  261872 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/proxy-client.key
	I1124 03:13:51.075576  261872 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem (1338 bytes)
	W1124 03:13:51.075619  261872 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429_empty.pem, impossibly tiny 0 bytes
	I1124 03:13:51.075632  261872 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:13:51.075682  261872 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:13:51.075740  261872 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:13:51.075797  261872 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem (1679 bytes)
	I1124 03:13:51.075862  261872 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:13:51.076633  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:13:51.095698  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:13:51.115901  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:13:51.135907  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:13:51.158937  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 03:13:51.181743  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:13:51.201818  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:13:51.221017  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:13:51.238538  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:13:51.256409  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem --> /usr/share/ca-certificates/8429.pem (1338 bytes)
	I1124 03:13:51.274062  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /usr/share/ca-certificates/84292.pem (1708 bytes)
	I1124 03:13:51.294048  261872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:13:51.307329  261872 ssh_runner.go:195] Run: openssl version
	I1124 03:13:51.314329  261872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8429.pem && ln -fs /usr/share/ca-certificates/8429.pem /etc/ssl/certs/8429.pem"
	I1124 03:13:51.323326  261872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8429.pem
	I1124 03:13:51.327467  261872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/8429.pem
	I1124 03:13:51.327550  261872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8429.pem
	I1124 03:13:51.364023  261872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8429.pem /etc/ssl/certs/51391683.0"
	I1124 03:13:51.372767  261872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84292.pem && ln -fs /usr/share/ca-certificates/84292.pem /etc/ssl/certs/84292.pem"
	I1124 03:13:51.382195  261872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84292.pem
	I1124 03:13:51.386199  261872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/84292.pem
	I1124 03:13:51.386248  261872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84292.pem
	I1124 03:13:51.431567  261872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84292.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:13:51.444377  261872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:13:51.453471  261872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:51.457950  261872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:51.458004  261872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:51.492858  261872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:13:51.502686  261872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:13:51.506868  261872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:13:51.546231  261872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:13:51.581525  261872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:13:51.625966  261872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:13:51.677213  261872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:13:51.733224  261872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
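The openssl runs above verify that each control-plane certificate remains valid for at least another 24 hours (-checkend 86400, i.e. 86400 seconds). The same check written as a loop; the certificate list mirrors the files probed in the log (sketch, not part of the log):

	# Sketch: openssl exits non-zero when a cert would expire within 24h.
	for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  if ! sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${crt}.crt"; then
	    echo "certificate ${crt}.crt expires within 24h"
	  fi
	done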
	I1124 03:13:51.785952  261872 kubeadm.go:401] StartCluster: {Name:old-k8s-version-838815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-838815 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:51.786067  261872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:13:51.786180  261872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:13:51.834133  261872 cri.go:89] found id: "fe5729b68274c0b8298033780db8e598f4fe68462447e990067ef8b90912c08e"
	I1124 03:13:51.834152  261872 cri.go:89] found id: "d1d68ceed01d35fb40c6c7d9b864ed747b3c699ffdb4016ec6a78ae1448d9a87"
	I1124 03:13:51.834170  261872 cri.go:89] found id: "ad6fe29a193921e5500399fb1cd74cb294bc8ca63b2ccf3aadb5dc7f28382e15"
	I1124 03:13:51.834174  261872 cri.go:89] found id: "a42f904b13af808a4635594fcbc05f51d10523e1395a305ac77d263dc68e56fe"
	I1124 03:13:51.834176  261872 cri.go:89] found id: "9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39"
	I1124 03:13:51.834190  261872 cri.go:89] found id: "d417c8d3e50280e381cd48b9133ff9b7eee5647f3de99e210052408619e7a770"
	I1124 03:13:51.834193  261872 cri.go:89] found id: "da6efdd3aa62d69f1d169afe237a09597925d965af4ae63cb4a3d5c4fdec4a9e"
	I1124 03:13:51.834196  261872 cri.go:89] found id: "5252475449db61ed023b07a2c7783bea6f77e7aad8afe357a282907f58383b49"
	I1124 03:13:51.834198  261872 cri.go:89] found id: "ba673dc701109bf125ff9985c0914f2ba2109e73d86e870cceda5494df539e38"
	I1124 03:13:51.834205  261872 cri.go:89] found id: "6d5b31c71edc46daad185ace0e1d3f5ec67dd2787b6d503af150ed6b776dd725"
	I1124 03:13:51.834207  261872 cri.go:89] found id: "6d6e12d242d5e9f46758e6fc6e8d424eb9bd8d2f091a9c6be9a834d07c08f917"
	I1124 03:13:51.834209  261872 cri.go:89] found id: "f861f902328c35216c5237199b026c1c5955de0259a65cb749000ef69844ea95"
	I1124 03:13:51.834212  261872 cri.go:89] found id: ""
	I1124 03:13:51.834267  261872 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1124 03:13:51.865491  261872 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95","pid":808,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95/rootfs","created":"2025-11-24T03:13:51.660061523Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-838815_927cbff391bb332f43f45f26699862ae","io.kubernetes.cri.sandbox-memory":"0","
io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-838815","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"927cbff391bb332f43f45f26699862ae"},"owner":"root"},{"ociVersion":"1.2.1","id":"a42f904b13af808a4635594fcbc05f51d10523e1395a305ac77d263dc68e56fe","pid":924,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a42f904b13af808a4635594fcbc05f51d10523e1395a305ac77d263dc68e56fe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a42f904b13af808a4635594fcbc05f51d10523e1395a305ac77d263dc68e56fe/rootfs","created":"2025-11-24T03:13:51.777907031Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95","io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-838815","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.c
ri.sandbox-uid":"927cbff391bb332f43f45f26699862ae"},"owner":"root"},{"ociVersion":"1.2.1","id":"aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e","pid":861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e/rootfs","created":"2025-11-24T03:13:51.697714244Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-838815_59d28715e65b26ba92b75a322d154274","io.kubernetes.cri.sa
ndbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-838815","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"59d28715e65b26ba92b75a322d154274"},"owner":"root"},{"ociVersion":"1.2.1","id":"ad6fe29a193921e5500399fb1cd74cb294bc8ca63b2ccf3aadb5dc7f28382e15","pid":931,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad6fe29a193921e5500399fb1cd74cb294bc8ca63b2ccf3aadb5dc7f28382e15","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad6fe29a193921e5500399fb1cd74cb294bc8ca63b2ccf3aadb5dc7f28382e15/rootfs","created":"2025-11-24T03:13:51.792630729Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.0","io.kubernetes.cri.sandbox-id":"fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-838815","io.kuber
netes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b037717515dec83b45dc7eca1e2db0bb"},"owner":"root"},{"ociVersion":"1.2.1","id":"d1d68ceed01d35fb40c6c7d9b864ed747b3c699ffdb4016ec6a78ae1448d9a87","pid":965,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1d68ceed01d35fb40c6c7d9b864ed747b3c699ffdb4016ec6a78ae1448d9a87","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1d68ceed01d35fb40c6c7d9b864ed747b3c699ffdb4016ec6a78ae1448d9a87/rootfs","created":"2025-11-24T03:13:51.810097482Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.0","io.kubernetes.cri.sandbox-id":"aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-838815","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"59d28715e65b26ba92b75a32
2d154274"},"owner":"root"},{"ociVersion":"1.2.1","id":"fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515","pid":823,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515/rootfs","created":"2025-11-24T03:13:51.662600956Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-838815_b037717515dec83b45dc7eca1e2db0bb","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sand
box-name":"kube-apiserver-old-k8s-version-838815","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b037717515dec83b45dc7eca1e2db0bb"},"owner":"root"},{"ociVersion":"1.2.1","id":"fe5729b68274c0b8298033780db8e598f4fe68462447e990067ef8b90912c08e","pid":972,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe5729b68274c0b8298033780db8e598f4fe68462447e990067ef8b90912c08e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe5729b68274c0b8298033780db8e598f4fe68462447e990067ef8b90912c08e/rootfs","created":"2025-11-24T03:13:51.820912641Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.0","io.kubernetes.cri.sandbox-id":"fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-838815","io.kubernetes.cri.sand
box-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9245bbeddbc02ca342af19af610818c6"},"owner":"root"},{"ociVersion":"1.2.1","id":"fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca","pid":863,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca/rootfs","created":"2025-11-24T03:13:51.699900586Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-838815_9
245bbeddbc02ca342af19af610818c6","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-838815","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9245bbeddbc02ca342af19af610818c6"},"owner":"root"}]
	I1124 03:13:51.865693  261872 cri.go:126] list returned 8 containers
	I1124 03:13:51.865718  261872 cri.go:129] container: {ID:02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95 Status:running}
	I1124 03:13:51.865741  261872 cri.go:131] skipping 02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95 - not in ps
	I1124 03:13:51.865753  261872 cri.go:129] container: {ID:a42f904b13af808a4635594fcbc05f51d10523e1395a305ac77d263dc68e56fe Status:running}
	I1124 03:13:51.865763  261872 cri.go:135] skipping {a42f904b13af808a4635594fcbc05f51d10523e1395a305ac77d263dc68e56fe running}: state = "running", want "paused"
	I1124 03:13:51.865801  261872 cri.go:129] container: {ID:aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e Status:running}
	I1124 03:13:51.865810  261872 cri.go:131] skipping aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e - not in ps
	I1124 03:13:51.865815  261872 cri.go:129] container: {ID:ad6fe29a193921e5500399fb1cd74cb294bc8ca63b2ccf3aadb5dc7f28382e15 Status:running}
	I1124 03:13:51.865822  261872 cri.go:135] skipping {ad6fe29a193921e5500399fb1cd74cb294bc8ca63b2ccf3aadb5dc7f28382e15 running}: state = "running", want "paused"
	I1124 03:13:51.865829  261872 cri.go:129] container: {ID:d1d68ceed01d35fb40c6c7d9b864ed747b3c699ffdb4016ec6a78ae1448d9a87 Status:running}
	I1124 03:13:51.865836  261872 cri.go:135] skipping {d1d68ceed01d35fb40c6c7d9b864ed747b3c699ffdb4016ec6a78ae1448d9a87 running}: state = "running", want "paused"
	I1124 03:13:51.865842  261872 cri.go:129] container: {ID:fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515 Status:running}
	I1124 03:13:51.865847  261872 cri.go:131] skipping fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515 - not in ps
	I1124 03:13:51.865854  261872 cri.go:129] container: {ID:fe5729b68274c0b8298033780db8e598f4fe68462447e990067ef8b90912c08e Status:created}
	I1124 03:13:51.865860  261872 cri.go:135] skipping {fe5729b68274c0b8298033780db8e598f4fe68462447e990067ef8b90912c08e created}: state = "created", want "paused"
	I1124 03:13:51.865867  261872 cri.go:129] container: {ID:fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca Status:running}
	I1124 03:13:51.865874  261872 cri.go:131] skipping fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca - not in ps
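The JSON dump and the "skipping ..." lines above show minikube listing runc containers in the k8s.io root and keeping only those whose status matches the requested state (paused, in this pass). With jq available on the node the same filter can be reproduced directly; jq is an assumption here, minikube itself performs this filtering in Go (sketch, not part of the log):

	# Sketch: keep only paused containers from the runc listing above.
	sudo runc --root /run/containerd/runc/k8s.io list -f json \
	  | jq -r '.[] | select(.status == "paused") | .id'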
	I1124 03:13:51.865924  261872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:13:51.877803  261872 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:13:51.877825  261872 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:13:51.877872  261872 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:13:51.887574  261872 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:13:51.888333  261872 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-838815" does not appear in /home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:13:51.888824  261872 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-4883/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-838815" cluster setting kubeconfig missing "old-k8s-version-838815" context setting]
	I1124 03:13:51.889590  261872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/kubeconfig: {Name:mkf99f016b653afd282cf36d34d1cc32c34d90de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:51.891510  261872 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:13:51.902318  261872 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 03:13:51.902378  261872 kubeadm.go:602] duration metric: took 24.546025ms to restartPrimaryControlPlane
	I1124 03:13:51.902409  261872 kubeadm.go:403] duration metric: took 116.453218ms to StartCluster
	I1124 03:13:51.902430  261872 settings.go:142] acquiring lock: {Name:mk05d84efd831d60555ea716cd9d2a0a41871249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:51.902506  261872 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:13:51.903625  261872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/kubeconfig: {Name:mkf99f016b653afd282cf36d34d1cc32c34d90de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:51.903885  261872 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:13:51.904110  261872 config.go:182] Loaded profile config "old-k8s-version-838815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 03:13:51.904121  261872 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:13:51.904217  261872 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-838815"
	I1124 03:13:51.904234  261872 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-838815"
	W1124 03:13:51.904243  261872 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:13:51.904243  261872 addons.go:70] Setting dashboard=true in profile "old-k8s-version-838815"
	I1124 03:13:51.904255  261872 addons.go:239] Setting addon dashboard=true in "old-k8s-version-838815"
	W1124 03:13:51.904269  261872 addons.go:248] addon dashboard should already be in state true
	I1124 03:13:51.904292  261872 host.go:66] Checking if "old-k8s-version-838815" exists ...
	I1124 03:13:51.904328  261872 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-838815"
	I1124 03:13:51.904342  261872 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-838815"
	W1124 03:13:51.904349  261872 addons.go:248] addon metrics-server should already be in state true
	I1124 03:13:51.904371  261872 host.go:66] Checking if "old-k8s-version-838815" exists ...
	I1124 03:13:51.904229  261872 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-838815"
	I1124 03:13:51.904445  261872 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-838815"
	I1124 03:13:51.904730  261872 host.go:66] Checking if "old-k8s-version-838815" exists ...
	I1124 03:13:51.904791  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:51.904866  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:51.905157  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:51.905172  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:51.907972  261872 out.go:179] * Verifying Kubernetes components...
	I1124 03:13:51.911888  261872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:51.934792  261872 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-838815"
	W1124 03:13:51.934822  261872 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:13:51.934852  261872 host.go:66] Checking if "old-k8s-version-838815" exists ...
	I1124 03:13:51.935311  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:51.940160  261872 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:13:51.940318  261872 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:13:51.941580  261872 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:13:51.941604  261872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:13:51.941654  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:51.941577  261872 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 03:13:51.942826  261872 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 03:13:51.942845  261872 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 03:13:51.942915  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:51.943004  261872 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:13:51.377450  256790 out.go:252]   - Booting up control plane ...
	I1124 03:13:51.377575  256790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:13:51.377695  256790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:13:51.378229  256790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:13:51.394496  256790 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:13:51.394675  256790 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:13:51.402191  256790 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:13:51.402335  256790 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:13:51.402405  256790 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:13:51.509919  256790 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:13:51.510064  256790 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:13:52.511837  256790 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001905438s
	I1124 03:13:52.515714  256790 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:13:52.515880  256790 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 03:13:52.516030  256790 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:13:52.516146  256790 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:13:51.944113  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:13:51.944128  261872 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:13:51.944190  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:51.965055  261872 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:13:51.965084  261872 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:13:51.965149  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:51.983416  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:51.986881  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:51.987120  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:52.009353  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:52.110916  261872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:13:52.115060  261872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:13:52.130676  261872 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-838815" to be "Ready" ...
	I1124 03:13:52.135712  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:13:52.135735  261872 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:13:52.137139  261872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:13:52.139822  261872 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 03:13:52.139839  261872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1124 03:13:52.158316  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:13:52.158389  261872 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:13:52.168093  261872 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 03:13:52.168792  261872 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 03:13:52.179507  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:13:52.179531  261872 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:13:52.195853  261872 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 03:13:52.195876  261872 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 03:13:52.229574  261872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 03:13:52.241492  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:13:52.241517  261872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:13:52.268734  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:13:52.268757  261872 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:13:52.284063  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:13:52.284089  261872 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:13:52.300059  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:13:52.300087  261872 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:13:52.319441  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:13:52.319463  261872 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:13:52.333000  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:13:52.333024  261872 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:13:52.347840  261872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:13:54.215012  261872 node_ready.go:49] node "old-k8s-version-838815" is "Ready"
	I1124 03:13:54.215045  261872 node_ready.go:38] duration metric: took 2.084340625s for node "old-k8s-version-838815" to be "Ready" ...
	I1124 03:13:54.215061  261872 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:13:54.215114  261872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:13:55.124393  261872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.009301652s)
	I1124 03:13:55.124478  261872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.987316377s)
	I1124 03:13:55.124539  261872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.894935129s)
	I1124 03:13:55.124565  261872 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-838815"
	I1124 03:13:55.609379  261872 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.394243392s)
	I1124 03:13:55.609422  261872 api_server.go:72] duration metric: took 3.70545074s to wait for apiserver process to appear ...
	I1124 03:13:55.609430  261872 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:13:55.609451  261872 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:13:55.609959  261872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.262059343s)
	I1124 03:13:55.611233  261872 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-838815 addons enable metrics-server
	
	I1124 03:13:55.612916  261872 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
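For reference, the addon rollout recorded above can be checked by hand outside the test harness; this is a minimal sketch, assuming kubectl is pointed at the old-k8s-version-838815 profile and that the dashboard addon uses its default kubernetes-dashboard namespace:

	# list addon state as minikube sees it
	minikube -p old-k8s-version-838815 addons list
	# confirm the dashboard and metrics-server workloads came up
	kubectl -n kubernetes-dashboard get pods
	kubectl -n kube-system get deployment metrics-server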
	I1124 03:13:52.729859  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:52.730282  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:52.730330  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:52.730379  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:52.770262  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:52.770287  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:52.770295  222154 cri.go:89] found id: ""
	I1124 03:13:52.770304  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:52.770423  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.776345  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.782454  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:52.782558  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:52.822065  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:52.822089  222154 cri.go:89] found id: ""
	I1124 03:13:52.822108  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:52.822162  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.827509  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:52.827582  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:52.864305  222154 cri.go:89] found id: ""
	I1124 03:13:52.864331  222154 logs.go:282] 0 containers: []
	W1124 03:13:52.864341  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:52.864356  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:52.864413  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:52.899888  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:52.899918  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:52.899926  222154 cri.go:89] found id: ""
	I1124 03:13:52.899936  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:52.900000  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.905926  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.911101  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:52.911273  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:52.945999  222154 cri.go:89] found id: ""
	I1124 03:13:52.946026  222154 logs.go:282] 0 containers: []
	W1124 03:13:52.946036  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:52.946044  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:52.946101  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:52.980910  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:52.980935  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:52.980941  222154 cri.go:89] found id: ""
	I1124 03:13:52.980950  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:52.981009  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.986708  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.991216  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:52.991291  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:53.029761  222154 cri.go:89] found id: ""
	I1124 03:13:53.029808  222154 logs.go:282] 0 containers: []
	W1124 03:13:53.029822  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:53.029830  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:53.029888  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:53.063730  222154 cri.go:89] found id: ""
	I1124 03:13:53.063753  222154 logs.go:282] 0 containers: []
	W1124 03:13:53.063761  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:53.063770  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:13:53.063794  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:53.097201  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:13:53.097230  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:53.143200  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:13:53.143227  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:13:53.209559  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:53.209596  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:53.262124  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:13:53.262154  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:53.305178  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:13:53.305212  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:53.371116  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:13:53.371155  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:13:53.410310  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:53.410338  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:53.550704  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:53.550741  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:53.567739  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:53.567801  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:53.645518  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:53.645546  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:13:53.645561  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:53.693920  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:13:53.693953  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:55.147244  256790 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.631504123s
	I1124 03:13:55.673486  256790 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.157794998s
	I1124 03:13:57.517908  256790 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002124183s
	I1124 03:13:57.529050  256790 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:13:57.539437  256790 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:13:57.547894  256790 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:13:57.548209  256790 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-182765 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:13:57.554385  256790 kubeadm.go:319] [bootstrap-token] Using token: 4gg6pq.7a7gneeh21qubvs3
	I1124 03:13:57.555734  256790 out.go:252]   - Configuring RBAC rules ...
	I1124 03:13:57.555980  256790 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:13:57.560138  256790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:13:57.565840  256790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:13:57.569730  256790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:13:57.572050  256790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:13:57.574156  256790 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:13:57.923390  256790 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:13:58.339738  256790 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:13:58.923418  256790 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:13:58.926055  256790 kubeadm.go:319] 
	I1124 03:13:58.926165  256790 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:13:58.926175  256790 kubeadm.go:319] 
	I1124 03:13:58.926306  256790 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:13:58.926319  256790 kubeadm.go:319] 
	I1124 03:13:58.926365  256790 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:13:58.926430  256790 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:13:58.926494  256790 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:13:58.926502  256790 kubeadm.go:319] 
	I1124 03:13:58.926557  256790 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:13:58.926567  256790 kubeadm.go:319] 
	I1124 03:13:58.926627  256790 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:13:58.926659  256790 kubeadm.go:319] 
	I1124 03:13:58.926730  256790 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:13:58.926856  256790 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:13:58.926919  256790 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:13:58.926939  256790 kubeadm.go:319] 
	I1124 03:13:58.927057  256790 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:13:58.927163  256790 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:13:58.927172  256790 kubeadm.go:319] 
	I1124 03:13:58.927278  256790 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4gg6pq.7a7gneeh21qubvs3 \
	I1124 03:13:58.927410  256790 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5e943442c508de754e907135e9f68708045a0a18fa82619a148153bf802a361b \
	I1124 03:13:58.927443  256790 kubeadm.go:319] 	--control-plane 
	I1124 03:13:58.927452  256790 kubeadm.go:319] 
	I1124 03:13:58.927565  256790 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:13:58.927575  256790 kubeadm.go:319] 
	I1124 03:13:58.927689  256790 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4gg6pq.7a7gneeh21qubvs3 \
	I1124 03:13:58.927869  256790 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5e943442c508de754e907135e9f68708045a0a18fa82619a148153bf802a361b 
	I1124 03:13:58.930004  256790 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:13:58.930109  256790 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
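Should the bootstrap token embedded in the join commands above expire (bootstrap tokens default to a 24h TTL), a fresh join command can be generated on the control plane; a minimal sketch, assuming shell access to the no-preload-182765 node:

	# print a complete join command with a new token and the current CA cert hash
	sudo kubeadm token create --print-join-command
	# list existing bootstrap tokens and their remaining TTLs
	sudo kubeadm token list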
	I1124 03:13:58.930138  256790 cni.go:84] Creating CNI manager for ""
	I1124 03:13:58.930148  256790 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:13:58.932398  256790 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:13:55.614396  261872 addons.go:530] duration metric: took 3.710278081s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1124 03:13:55.615674  261872 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:13:55.617606  261872 api_server.go:141] control plane version: v1.28.0
	I1124 03:13:55.617634  261872 api_server.go:131] duration metric: took 8.19655ms to wait for apiserver health ...
	I1124 03:13:55.617645  261872 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:13:55.623812  261872 system_pods.go:59] 9 kube-system pods found
	I1124 03:13:55.623863  261872 system_pods.go:61] "coredns-5dd5756b68-gfsqm" [afa1f94c-8c55-4847-9152-189f27ff812a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:13:55.623878  261872 system_pods.go:61] "etcd-old-k8s-version-838815" [6bbc2335-d9af-448e-87e7-2179d5b28065] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:13:55.623898  261872 system_pods.go:61] "kindnet-rvm46" [f375e199-56a3-44e4-97fb-08f38dc56b33] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:13:55.623914  261872 system_pods.go:61] "kube-apiserver-old-k8s-version-838815" [392c3bef-1022-4055-96e3-cb0a96f804a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:13:55.623933  261872 system_pods.go:61] "kube-controller-manager-old-k8s-version-838815" [73e96a09-3a84-4bb8-8e3c-4c9804d0e497] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:13:55.623948  261872 system_pods.go:61] "kube-proxy-cz68g" [d975541d-c6d9-4d84-8dc6-4ee5db7a575f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:13:55.623956  261872 system_pods.go:61] "kube-scheduler-old-k8s-version-838815" [065763c2-fe08-4d07-9851-171461f47d49] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:13:55.623965  261872 system_pods.go:61] "metrics-server-57f55c9bc5-4qm94" [bca03fa8-7c45-489c-b2fc-5834243ab91c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:13:55.623975  261872 system_pods.go:61] "storage-provisioner" [1dc12010-009c-4a23-af68-7bbba3679259] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:13:55.623990  261872 system_pods.go:74] duration metric: took 6.331708ms to wait for pod list to return data ...
	I1124 03:13:55.624007  261872 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:13:55.627151  261872 default_sa.go:45] found service account: "default"
	I1124 03:13:55.627176  261872 default_sa.go:55] duration metric: took 3.16223ms for default service account to be created ...
	I1124 03:13:55.627186  261872 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:13:55.638577  261872 system_pods.go:86] 9 kube-system pods found
	I1124 03:13:55.638621  261872 system_pods.go:89] "coredns-5dd5756b68-gfsqm" [afa1f94c-8c55-4847-9152-189f27ff812a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:13:55.638636  261872 system_pods.go:89] "etcd-old-k8s-version-838815" [6bbc2335-d9af-448e-87e7-2179d5b28065] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:13:55.638649  261872 system_pods.go:89] "kindnet-rvm46" [f375e199-56a3-44e4-97fb-08f38dc56b33] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:13:55.638666  261872 system_pods.go:89] "kube-apiserver-old-k8s-version-838815" [392c3bef-1022-4055-96e3-cb0a96f804a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:13:55.638676  261872 system_pods.go:89] "kube-controller-manager-old-k8s-version-838815" [73e96a09-3a84-4bb8-8e3c-4c9804d0e497] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:13:55.638692  261872 system_pods.go:89] "kube-proxy-cz68g" [d975541d-c6d9-4d84-8dc6-4ee5db7a575f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:13:55.638701  261872 system_pods.go:89] "kube-scheduler-old-k8s-version-838815" [065763c2-fe08-4d07-9851-171461f47d49] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:13:55.638715  261872 system_pods.go:89] "metrics-server-57f55c9bc5-4qm94" [bca03fa8-7c45-489c-b2fc-5834243ab91c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:13:55.638722  261872 system_pods.go:89] "storage-provisioner" [1dc12010-009c-4a23-af68-7bbba3679259] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:13:55.638738  261872 system_pods.go:126] duration metric: took 11.545197ms to wait for k8s-apps to be running ...
	I1124 03:13:55.638749  261872 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:13:55.638817  261872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:55.663974  261872 system_svc.go:56] duration metric: took 25.216876ms WaitForService to wait for kubelet
	I1124 03:13:55.664014  261872 kubeadm.go:587] duration metric: took 3.760044799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:13:55.664038  261872 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:13:55.669975  261872 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:13:55.670020  261872 node_conditions.go:123] node cpu capacity is 8
	I1124 03:13:55.670042  261872 node_conditions.go:105] duration metric: took 5.99814ms to run NodePressure ...
	I1124 03:13:55.670059  261872 start.go:242] waiting for startup goroutines ...
	I1124 03:13:55.670068  261872 start.go:247] waiting for cluster config update ...
	I1124 03:13:55.670083  261872 start.go:256] writing updated cluster config ...
	I1124 03:13:55.670575  261872 ssh_runner.go:195] Run: rm -f paused
	I1124 03:13:55.676895  261872 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:13:55.682948  261872 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gfsqm" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:13:57.689262  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:13:59.689521  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
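The pod_ready polling above corresponds to ordinary kubectl readiness checks; a minimal sketch, assuming the old-k8s-version-838815 kubeconfig is active and that CoreDNS carries the k8s-app=kube-dns label the waiter selects on:

	# watch the CoreDNS pod until its Ready condition flips to True
	kubectl -n kube-system get pods -l k8s-app=kube-dns -w
	# inspect container state and events if it stays NotReady
	kubectl -n kube-system describe pods -l k8s-app=kube-dns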
	I1124 03:13:56.236082  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:56.236478  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:56.236527  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:56.236569  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:56.267461  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:56.267479  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:56.267483  222154 cri.go:89] found id: ""
	I1124 03:13:56.267490  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:56.267539  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.272263  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.279717  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:56.279814  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:56.316735  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:56.316763  222154 cri.go:89] found id: ""
	I1124 03:13:56.316772  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:56.316841  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.322328  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:56.322412  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:56.357228  222154 cri.go:89] found id: ""
	I1124 03:13:56.357257  222154 logs.go:282] 0 containers: []
	W1124 03:13:56.357269  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:56.357276  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:56.357332  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:56.383314  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:56.383337  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:56.383342  222154 cri.go:89] found id: ""
	I1124 03:13:56.383350  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:56.383405  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.387531  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.391426  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:56.391491  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:56.418050  222154 cri.go:89] found id: ""
	I1124 03:13:56.418074  222154 logs.go:282] 0 containers: []
	W1124 03:13:56.418084  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:56.418090  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:56.418139  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:56.444046  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:56.444065  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:56.444070  222154 cri.go:89] found id: ""
	I1124 03:13:56.444080  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:56.444136  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.448167  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.451808  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:56.451857  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:56.476763  222154 cri.go:89] found id: ""
	I1124 03:13:56.476795  222154 logs.go:282] 0 containers: []
	W1124 03:13:56.476805  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:56.476813  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:56.476862  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:56.502409  222154 cri.go:89] found id: ""
	I1124 03:13:56.502435  222154 logs.go:282] 0 containers: []
	W1124 03:13:56.502444  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:56.502455  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:56.502476  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:56.558000  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:56.558026  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:13:56.558043  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:56.590347  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:13:56.590377  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:56.629340  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:13:56.629377  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:56.692398  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:13:56.692436  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:56.725794  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:13:56.725822  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:56.767008  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:56.767040  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:56.806637  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:13:56.806666  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:56.846682  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:13:56.846709  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:13:56.899795  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:13:56.899831  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:13:56.934323  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:56.934353  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:57.054732  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:57.054764  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
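The repeated "connection refused" probes in this block can be reproduced manually to separate a dead apiserver from a network problem; a minimal sketch, assuming a shell on the affected node (the healthz endpoint may require client credentials depending on apiserver flags):

	# probe the apiserver health endpoint the same way minikube does
	curl -k https://192.168.76.2:8443/healthz
	# check whether an apiserver container exists under containerd at all
	sudo crictl ps -a --name kube-apiserver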
	I1124 03:13:59.572502  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:59.573017  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:59.573064  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:59.573114  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:59.601228  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:59.601247  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:59.601251  222154 cri.go:89] found id: ""
	I1124 03:13:59.601260  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:59.601320  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.605366  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.609257  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:59.609318  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:59.635336  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:59.635363  222154 cri.go:89] found id: ""
	I1124 03:13:59.635376  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:59.635505  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.640364  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:59.640430  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:59.667097  222154 cri.go:89] found id: ""
	I1124 03:13:59.667122  222154 logs.go:282] 0 containers: []
	W1124 03:13:59.667129  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:59.667136  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:59.667190  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:59.695992  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:59.696015  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:59.696020  222154 cri.go:89] found id: ""
	I1124 03:13:59.696028  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:59.696080  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.700222  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.703970  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:59.704022  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:59.728834  222154 cri.go:89] found id: ""
	I1124 03:13:59.728861  222154 logs.go:282] 0 containers: []
	W1124 03:13:59.728870  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:59.728877  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:59.728933  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:59.757314  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:59.757339  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:59.757345  222154 cri.go:89] found id: ""
	I1124 03:13:59.757354  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:59.757403  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.761682  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.766233  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:59.766297  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:59.798732  222154 cri.go:89] found id: ""
	I1124 03:13:59.798756  222154 logs.go:282] 0 containers: []
	W1124 03:13:59.798766  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:59.798783  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:59.798843  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:59.828107  222154 cri.go:89] found id: ""
	I1124 03:13:59.828128  222154 logs.go:282] 0 containers: []
	W1124 03:13:59.828135  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:59.828144  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:59.828155  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:59.921372  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:59.921404  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:59.935541  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:59.935570  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:59.996288  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:59.996308  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:59.996320  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:00.030411  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:14:00.030443  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:00.083730  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:14:00.083767  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:00.117527  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:14:00.117557  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:14:00.162202  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:14:00.162231  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:14:00.195840  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:14:00.195865  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:00.226785  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:14:00.226815  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:00.261107  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:14:00.261133  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:00.300154  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:14:00.300182  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:58.933554  256790 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:13:58.938576  256790 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:13:58.938594  256790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:13:58.952039  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
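Once the CNI manifest above has been applied, basic pod networking can be sanity-checked from the node; a minimal sketch, assuming the kindnet manifest deploys into kube-system and writes its configuration under /etc/cni/net.d:

	# confirm a CNI config file was written for the runtime to pick up
	ls /etc/cni/net.d/
	# confirm the CNI pods were scheduled and are running
	kubectl -n kube-system get pods -o wide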
	I1124 03:13:59.166247  256790 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:13:59.166337  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-182765 minikube.k8s.io/updated_at=2025_11_24T03_13_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=no-preload-182765 minikube.k8s.io/primary=true
	I1124 03:13:59.166342  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:13:59.176885  256790 ops.go:34] apiserver oom_adj: -16
	I1124 03:13:59.246724  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:13:59.747124  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:00.247534  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:00.746933  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:01.246841  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:01.747137  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:02.246868  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:02.747050  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:03.246962  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:03.747672  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:03.814528  256790 kubeadm.go:1114] duration metric: took 4.648257718s to wait for elevateKubeSystemPrivileges
	I1124 03:14:03.814569  256790 kubeadm.go:403] duration metric: took 16.563608532s to StartCluster
	I1124 03:14:03.814590  256790 settings.go:142] acquiring lock: {Name:mk05d84efd831d60555ea716cd9d2a0a41871249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:03.814662  256790 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:14:03.817002  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/kubeconfig: {Name:mkf99f016b653afd282cf36d34d1cc32c34d90de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:03.817278  256790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:14:03.817293  256790 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:14:03.817402  256790 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:14:03.817506  256790 addons.go:70] Setting storage-provisioner=true in profile "no-preload-182765"
	I1124 03:14:03.817515  256790 addons.go:70] Setting default-storageclass=true in profile "no-preload-182765"
	I1124 03:14:03.817526  256790 addons.go:239] Setting addon storage-provisioner=true in "no-preload-182765"
	I1124 03:14:03.817542  256790 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-182765"
	I1124 03:14:03.817552  256790 config.go:182] Loaded profile config "no-preload-182765": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:14:03.817557  256790 host.go:66] Checking if "no-preload-182765" exists ...
	I1124 03:14:03.817978  256790 cli_runner.go:164] Run: docker container inspect no-preload-182765 --format={{.State.Status}}
	I1124 03:14:03.818122  256790 cli_runner.go:164] Run: docker container inspect no-preload-182765 --format={{.State.Status}}
	I1124 03:14:03.819508  256790 out.go:179] * Verifying Kubernetes components...
	I1124 03:14:03.820743  256790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:14:03.848100  256790 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:14:03.849349  256790 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:14:03.849368  256790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:14:03.849424  256790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-182765
	I1124 03:14:03.850444  256790 addons.go:239] Setting addon default-storageclass=true in "no-preload-182765"
	I1124 03:14:03.850489  256790 host.go:66] Checking if "no-preload-182765" exists ...
	I1124 03:14:03.850984  256790 cli_runner.go:164] Run: docker container inspect no-preload-182765 --format={{.State.Status}}
	I1124 03:14:03.882640  256790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/no-preload-182765/id_rsa Username:docker}
	I1124 03:14:03.888690  256790 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:14:03.888714  256790 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:14:03.888824  256790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-182765
	I1124 03:14:03.911485  256790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/no-preload-182765/id_rsa Username:docker}
	I1124 03:14:03.927355  256790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:14:03.975885  256790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:14:04.003884  256790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:14:04.024866  256790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:14:04.118789  256790 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 03:14:04.119847  256790 node_ready.go:35] waiting up to 6m0s for node "no-preload-182765" to be "Ready" ...
	I1124 03:14:04.330452  256790 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1124 03:14:02.188985  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:04.189085  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:14:02.831564  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:14:02.831997  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:14:02.832054  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:14:02.832102  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:14:02.858999  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:02.859020  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:02.859027  222154 cri.go:89] found id: ""
	I1124 03:14:02.859034  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:14:02.859095  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:02.863144  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:02.866827  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:14:02.866895  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:14:02.894574  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:02.894592  222154 cri.go:89] found id: ""
	I1124 03:14:02.894599  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:14:02.894643  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:02.898881  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:14:02.898946  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:14:02.925658  222154 cri.go:89] found id: ""
	I1124 03:14:02.925683  222154 logs.go:282] 0 containers: []
	W1124 03:14:02.925693  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:14:02.925700  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:14:02.925761  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:14:02.952756  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:02.952807  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:02.952814  222154 cri.go:89] found id: ""
	I1124 03:14:02.952824  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:14:02.952872  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:02.956856  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:02.960582  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:14:02.960636  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:14:02.988059  222154 cri.go:89] found id: ""
	I1124 03:14:02.988082  222154 logs.go:282] 0 containers: []
	W1124 03:14:02.988089  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:14:02.988094  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:14:02.988143  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:14:03.016143  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:03.016181  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:03.016186  222154 cri.go:89] found id: ""
	I1124 03:14:03.016196  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:14:03.016247  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:03.020163  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:03.024013  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:14:03.024082  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:14:03.052755  222154 cri.go:89] found id: ""
	I1124 03:14:03.052790  222154 logs.go:282] 0 containers: []
	W1124 03:14:03.052801  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:14:03.052809  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:14:03.052868  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:14:03.078674  222154 cri.go:89] found id: ""
	I1124 03:14:03.078694  222154 logs.go:282] 0 containers: []
	W1124 03:14:03.078700  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:14:03.078713  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:14:03.078724  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:03.132465  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:14:03.132494  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:14:03.177122  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:14:03.177154  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:14:03.211154  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:14:03.211178  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:14:03.226137  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:14:03.226166  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:14:03.290769  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:14:03.290809  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:14:03.290825  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:03.330663  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:14:03.330693  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:03.367605  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:14:03.367634  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:03.395989  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:14:03.396020  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:03.431222  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:14:03.431267  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:14:03.537842  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:14:03.537878  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:03.572333  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:14:03.572364  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:06.112455  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:14:04.331391  256790 addons.go:530] duration metric: took 513.991849ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:14:04.622945  256790 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-182765" context rescaled to 1 replicas
	W1124 03:14:06.122987  256790 node_ready.go:57] node "no-preload-182765" has "Ready":"False" status (will retry)
	W1124 03:14:06.693286  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:09.189222  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:14:11.117117  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 03:14:11.117189  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:14:11.117261  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:14:11.150063  222154 cri.go:89] found id: "e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f"
	I1124 03:14:11.150086  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:11.150092  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:11.150096  222154 cri.go:89] found id: ""
	I1124 03:14:11.150105  222154 logs.go:282] 3 containers: [e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:14:11.150166  222154 ssh_runner.go:195] Run: which crictl
	W1124 03:14:08.123105  256790 node_ready.go:57] node "no-preload-182765" has "Ready":"False" status (will retry)
	W1124 03:14:10.623117  256790 node_ready.go:57] node "no-preload-182765" has "Ready":"False" status (will retry)
	W1124 03:14:12.623279  256790 node_ready.go:57] node "no-preload-182765" has "Ready":"False" status (will retry)
	W1124 03:14:11.189944  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:13.688591  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:14:11.155062  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.159119  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.163515  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:14:11.163583  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:14:11.196356  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:11.196398  222154 cri.go:89] found id: ""
	I1124 03:14:11.196409  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:14:11.196465  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.201060  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:14:11.201126  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:14:11.232445  222154 cri.go:89] found id: ""
	I1124 03:14:11.232472  222154 logs.go:282] 0 containers: []
	W1124 03:14:11.232482  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:14:11.232490  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:14:11.232556  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:14:11.263992  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:11.264013  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:11.264017  222154 cri.go:89] found id: ""
	I1124 03:14:11.264024  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:14:11.264081  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.268463  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.272372  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:14:11.272421  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:14:11.302039  222154 cri.go:89] found id: ""
	I1124 03:14:11.302062  222154 logs.go:282] 0 containers: []
	W1124 03:14:11.302069  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:14:11.302077  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:14:11.302123  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:14:11.335864  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:11.335888  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:11.335893  222154 cri.go:89] found id: ""
	I1124 03:14:11.335901  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:14:11.335956  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.340998  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.346060  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:14:11.346128  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:14:11.383326  222154 cri.go:89] found id: ""
	I1124 03:14:11.383357  222154 logs.go:282] 0 containers: []
	W1124 03:14:11.383369  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:14:11.383378  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:14:11.383439  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:14:11.413051  222154 cri.go:89] found id: ""
	I1124 03:14:11.413076  222154 logs.go:282] 0 containers: []
	W1124 03:14:11.413084  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:14:11.413093  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:14:11.413103  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:14:11.427750  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:14:11.427852  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:11.464164  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:14:11.464196  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:11.498481  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:14:11.498508  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:11.527378  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:14:11.527403  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:11.566711  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:14:11.566740  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:14:15.123100  256790 node_ready.go:57] node "no-preload-182765" has "Ready":"False" status (will retry)
	W1124 03:14:17.622462  256790 node_ready.go:57] node "no-preload-182765" has "Ready":"False" status (will retry)
	W1124 03:14:15.688863  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:18.188648  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:14:18.122398  256790 node_ready.go:49] node "no-preload-182765" is "Ready"
	I1124 03:14:18.122427  256790 node_ready.go:38] duration metric: took 14.002519282s for node "no-preload-182765" to be "Ready" ...
	I1124 03:14:18.122445  256790 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:14:18.122498  256790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:14:18.135618  256790 api_server.go:72] duration metric: took 14.318291491s to wait for apiserver process to appear ...
	I1124 03:14:18.135648  256790 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:14:18.135693  256790 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:14:18.140684  256790 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:14:18.141588  256790 api_server.go:141] control plane version: v1.34.1
	I1124 03:14:18.141609  256790 api_server.go:131] duration metric: took 5.953987ms to wait for apiserver health ...
	I1124 03:14:18.141618  256790 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:14:18.147887  256790 system_pods.go:59] 8 kube-system pods found
	I1124 03:14:18.147923  256790 system_pods.go:61] "coredns-66bc5c9577-lcrl8" [3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:14:18.147930  256790 system_pods.go:61] "etcd-no-preload-182765" [b38360ae-6e1c-4f7d-8529-5f3dfe9431d1] Running
	I1124 03:14:18.147937  256790 system_pods.go:61] "kindnet-ncvw4" [6d2a43f2-69e3-4768-8e15-39fbe53d92f9] Running
	I1124 03:14:18.147949  256790 system_pods.go:61] "kube-apiserver-no-preload-182765" [a9443b37-da68-4a37-bd93-497df769c9af] Running
	I1124 03:14:18.147955  256790 system_pods.go:61] "kube-controller-manager-no-preload-182765" [7735413f-2120-4660-be51-b157a8e1e9fa] Running
	I1124 03:14:18.147959  256790 system_pods.go:61] "kube-proxy-fx42v" [4c8c52d6-d4fd-4be2-8246-f96d95997a62] Running
	I1124 03:14:18.147963  256790 system_pods.go:61] "kube-scheduler-no-preload-182765" [202684ee-474e-4f60-afa0-b5ddabf71edc] Running
	I1124 03:14:18.147969  256790 system_pods.go:61] "storage-provisioner" [271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:14:18.147976  256790 system_pods.go:74] duration metric: took 6.352644ms to wait for pod list to return data ...
	I1124 03:14:18.147984  256790 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:14:18.150951  256790 default_sa.go:45] found service account: "default"
	I1124 03:14:18.151027  256790 default_sa.go:55] duration metric: took 3.035625ms for default service account to be created ...
	I1124 03:14:18.151038  256790 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:14:18.156382  256790 system_pods.go:86] 8 kube-system pods found
	I1124 03:14:18.156421  256790 system_pods.go:89] "coredns-66bc5c9577-lcrl8" [3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:14:18.156429  256790 system_pods.go:89] "etcd-no-preload-182765" [b38360ae-6e1c-4f7d-8529-5f3dfe9431d1] Running
	I1124 03:14:18.156449  256790 system_pods.go:89] "kindnet-ncvw4" [6d2a43f2-69e3-4768-8e15-39fbe53d92f9] Running
	I1124 03:14:18.156456  256790 system_pods.go:89] "kube-apiserver-no-preload-182765" [a9443b37-da68-4a37-bd93-497df769c9af] Running
	I1124 03:14:18.156468  256790 system_pods.go:89] "kube-controller-manager-no-preload-182765" [7735413f-2120-4660-be51-b157a8e1e9fa] Running
	I1124 03:14:18.156474  256790 system_pods.go:89] "kube-proxy-fx42v" [4c8c52d6-d4fd-4be2-8246-f96d95997a62] Running
	I1124 03:14:18.156480  256790 system_pods.go:89] "kube-scheduler-no-preload-182765" [202684ee-474e-4f60-afa0-b5ddabf71edc] Running
	I1124 03:14:18.156487  256790 system_pods.go:89] "storage-provisioner" [271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:14:18.156510  256790 retry.go:31] will retry after 296.995151ms: missing components: kube-dns
	I1124 03:14:18.457756  256790 system_pods.go:86] 8 kube-system pods found
	I1124 03:14:18.457829  256790 system_pods.go:89] "coredns-66bc5c9577-lcrl8" [3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:14:18.457842  256790 system_pods.go:89] "etcd-no-preload-182765" [b38360ae-6e1c-4f7d-8529-5f3dfe9431d1] Running
	I1124 03:14:18.457851  256790 system_pods.go:89] "kindnet-ncvw4" [6d2a43f2-69e3-4768-8e15-39fbe53d92f9] Running
	I1124 03:14:18.457857  256790 system_pods.go:89] "kube-apiserver-no-preload-182765" [a9443b37-da68-4a37-bd93-497df769c9af] Running
	I1124 03:14:18.457862  256790 system_pods.go:89] "kube-controller-manager-no-preload-182765" [7735413f-2120-4660-be51-b157a8e1e9fa] Running
	I1124 03:14:18.457866  256790 system_pods.go:89] "kube-proxy-fx42v" [4c8c52d6-d4fd-4be2-8246-f96d95997a62] Running
	I1124 03:14:18.457870  256790 system_pods.go:89] "kube-scheduler-no-preload-182765" [202684ee-474e-4f60-afa0-b5ddabf71edc] Running
	I1124 03:14:18.457878  256790 system_pods.go:89] "storage-provisioner" [271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:14:18.457892  256790 retry.go:31] will retry after 311.207422ms: missing components: kube-dns
	I1124 03:14:18.772742  256790 system_pods.go:86] 8 kube-system pods found
	I1124 03:14:18.772795  256790 system_pods.go:89] "coredns-66bc5c9577-lcrl8" [3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:14:18.772801  256790 system_pods.go:89] "etcd-no-preload-182765" [b38360ae-6e1c-4f7d-8529-5f3dfe9431d1] Running
	I1124 03:14:18.772807  256790 system_pods.go:89] "kindnet-ncvw4" [6d2a43f2-69e3-4768-8e15-39fbe53d92f9] Running
	I1124 03:14:18.772810  256790 system_pods.go:89] "kube-apiserver-no-preload-182765" [a9443b37-da68-4a37-bd93-497df769c9af] Running
	I1124 03:14:18.772815  256790 system_pods.go:89] "kube-controller-manager-no-preload-182765" [7735413f-2120-4660-be51-b157a8e1e9fa] Running
	I1124 03:14:18.772818  256790 system_pods.go:89] "kube-proxy-fx42v" [4c8c52d6-d4fd-4be2-8246-f96d95997a62] Running
	I1124 03:14:18.772821  256790 system_pods.go:89] "kube-scheduler-no-preload-182765" [202684ee-474e-4f60-afa0-b5ddabf71edc] Running
	I1124 03:14:18.772827  256790 system_pods.go:89] "storage-provisioner" [271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:14:18.772844  256790 retry.go:31] will retry after 451.19412ms: missing components: kube-dns
	I1124 03:14:19.227762  256790 system_pods.go:86] 8 kube-system pods found
	I1124 03:14:19.227802  256790 system_pods.go:89] "coredns-66bc5c9577-lcrl8" [3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:14:19.227808  256790 system_pods.go:89] "etcd-no-preload-182765" [b38360ae-6e1c-4f7d-8529-5f3dfe9431d1] Running
	I1124 03:14:19.227815  256790 system_pods.go:89] "kindnet-ncvw4" [6d2a43f2-69e3-4768-8e15-39fbe53d92f9] Running
	I1124 03:14:19.227819  256790 system_pods.go:89] "kube-apiserver-no-preload-182765" [a9443b37-da68-4a37-bd93-497df769c9af] Running
	I1124 03:14:19.227823  256790 system_pods.go:89] "kube-controller-manager-no-preload-182765" [7735413f-2120-4660-be51-b157a8e1e9fa] Running
	I1124 03:14:19.227826  256790 system_pods.go:89] "kube-proxy-fx42v" [4c8c52d6-d4fd-4be2-8246-f96d95997a62] Running
	I1124 03:14:19.227829  256790 system_pods.go:89] "kube-scheduler-no-preload-182765" [202684ee-474e-4f60-afa0-b5ddabf71edc] Running
	I1124 03:14:19.227834  256790 system_pods.go:89] "storage-provisioner" [271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:14:19.227850  256790 retry.go:31] will retry after 607.556874ms: missing components: kube-dns
	I1124 03:14:19.839632  256790 system_pods.go:86] 8 kube-system pods found
	I1124 03:14:19.839665  256790 system_pods.go:89] "coredns-66bc5c9577-lcrl8" [3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7] Running
	I1124 03:14:19.839672  256790 system_pods.go:89] "etcd-no-preload-182765" [b38360ae-6e1c-4f7d-8529-5f3dfe9431d1] Running
	I1124 03:14:19.839676  256790 system_pods.go:89] "kindnet-ncvw4" [6d2a43f2-69e3-4768-8e15-39fbe53d92f9] Running
	I1124 03:14:19.839680  256790 system_pods.go:89] "kube-apiserver-no-preload-182765" [a9443b37-da68-4a37-bd93-497df769c9af] Running
	I1124 03:14:19.839684  256790 system_pods.go:89] "kube-controller-manager-no-preload-182765" [7735413f-2120-4660-be51-b157a8e1e9fa] Running
	I1124 03:14:19.839687  256790 system_pods.go:89] "kube-proxy-fx42v" [4c8c52d6-d4fd-4be2-8246-f96d95997a62] Running
	I1124 03:14:19.839691  256790 system_pods.go:89] "kube-scheduler-no-preload-182765" [202684ee-474e-4f60-afa0-b5ddabf71edc] Running
	I1124 03:14:19.839694  256790 system_pods.go:89] "storage-provisioner" [271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1] Running
	I1124 03:14:19.839701  256790 system_pods.go:126] duration metric: took 1.688658372s to wait for k8s-apps to be running ...
	I1124 03:14:19.839712  256790 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:14:19.839755  256790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:14:19.852890  256790 system_svc.go:56] duration metric: took 13.168343ms WaitForService to wait for kubelet
	I1124 03:14:19.852957  256790 kubeadm.go:587] duration metric: took 16.035598027s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:14:19.852979  256790 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:14:19.855699  256790 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:14:19.855728  256790 node_conditions.go:123] node cpu capacity is 8
	I1124 03:14:19.855745  256790 node_conditions.go:105] duration metric: took 2.761809ms to run NodePressure ...
	I1124 03:14:19.855792  256790 start.go:242] waiting for startup goroutines ...
	I1124 03:14:19.855806  256790 start.go:247] waiting for cluster config update ...
	I1124 03:14:19.855819  256790 start.go:256] writing updated cluster config ...
	I1124 03:14:19.856129  256790 ssh_runner.go:195] Run: rm -f paused
	I1124 03:14:19.861135  256790 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:14:19.865065  256790 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lcrl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:19.869262  256790 pod_ready.go:94] pod "coredns-66bc5c9577-lcrl8" is "Ready"
	I1124 03:14:19.869280  256790 pod_ready.go:86] duration metric: took 4.193402ms for pod "coredns-66bc5c9577-lcrl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:19.871213  256790 pod_ready.go:83] waiting for pod "etcd-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:19.874764  256790 pod_ready.go:94] pod "etcd-no-preload-182765" is "Ready"
	I1124 03:14:19.874797  256790 pod_ready.go:86] duration metric: took 3.566214ms for pod "etcd-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:19.876539  256790 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:19.880318  256790 pod_ready.go:94] pod "kube-apiserver-no-preload-182765" is "Ready"
	I1124 03:14:19.880345  256790 pod_ready.go:86] duration metric: took 3.788255ms for pod "kube-apiserver-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:19.882349  256790 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:20.264978  256790 pod_ready.go:94] pod "kube-controller-manager-no-preload-182765" is "Ready"
	I1124 03:14:20.265001  256790 pod_ready.go:86] duration metric: took 382.630322ms for pod "kube-controller-manager-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:20.466538  256790 pod_ready.go:83] waiting for pod "kube-proxy-fx42v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:20.865522  256790 pod_ready.go:94] pod "kube-proxy-fx42v" is "Ready"
	I1124 03:14:20.865548  256790 pod_ready.go:86] duration metric: took 398.983015ms for pod "kube-proxy-fx42v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:21.065507  256790 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:21.465719  256790 pod_ready.go:94] pod "kube-scheduler-no-preload-182765" is "Ready"
	I1124 03:14:21.465743  256790 pod_ready.go:86] duration metric: took 400.213094ms for pod "kube-scheduler-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:21.465755  256790 pod_ready.go:40] duration metric: took 1.604587225s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:14:21.507898  256790 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:14:21.510018  256790 out.go:179] * Done! kubectl is now configured to use "no-preload-182765" cluster and "default" namespace by default
	W1124 03:14:20.688174  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:22.688990  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:14:21.627184  222154 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.06042351s)
	W1124 03:14:21.627222  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 03:14:21.627234  222154 logs.go:123] Gathering logs for kube-apiserver [e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f] ...
	I1124 03:14:21.627248  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f"
	I1124 03:14:21.662569  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:14:21.662605  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:21.701452  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:14:21.701475  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:21.754508  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:14:21.754538  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:21.790005  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:14:21.790032  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:14:21.838678  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:14:21.838709  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:14:21.869280  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:14:21.869304  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:14:24.463017  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:14:24.998983  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:39740->192.168.76.2:8443: read: connection reset by peer
	I1124 03:14:24.999054  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:14:24.999109  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:14:25.029909  222154 cri.go:89] found id: "e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f"
	I1124 03:14:25.029932  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:25.029939  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:25.029943  222154 cri.go:89] found id: ""
	I1124 03:14:25.029951  222154 logs.go:282] 3 containers: [e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:14:25.030015  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.034874  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.038716  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.042498  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:14:25.042559  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:14:25.070117  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:25.070196  222154 cri.go:89] found id: ""
	I1124 03:14:25.070218  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:14:25.070287  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.074335  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:14:25.074400  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:14:25.101919  222154 cri.go:89] found id: ""
	I1124 03:14:25.101945  222154 logs.go:282] 0 containers: []
	W1124 03:14:25.101953  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:14:25.101959  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:14:25.102003  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:14:25.128285  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:25.128306  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:25.128311  222154 cri.go:89] found id: ""
	I1124 03:14:25.128318  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:14:25.128361  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.132609  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.136499  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:14:25.136557  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:14:25.161931  222154 cri.go:89] found id: ""
	I1124 03:14:25.161951  222154 logs.go:282] 0 containers: []
	W1124 03:14:25.161959  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:14:25.161969  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:14:25.162022  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:14:25.189937  222154 cri.go:89] found id: "e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b"
	I1124 03:14:25.189956  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:25.189962  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:25.189966  222154 cri.go:89] found id: ""
	I1124 03:14:25.189976  222154 logs.go:282] 3 containers: [e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:14:25.190028  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.194426  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.198094  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.201725  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:14:25.201766  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:14:25.227045  222154 cri.go:89] found id: ""
	I1124 03:14:25.227065  222154 logs.go:282] 0 containers: []
	W1124 03:14:25.227071  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:14:25.227077  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:14:25.227119  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:14:25.254768  222154 cri.go:89] found id: ""
	I1124 03:14:25.254825  222154 logs.go:282] 0 containers: []
	W1124 03:14:25.254836  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:14:25.254848  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:14:25.254862  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:25.289249  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:14:25.289273  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:25.320254  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:14:25.320278  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:25.354084  222154 logs.go:123] Gathering logs for kube-controller-manager [e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b] ...
	I1124 03:14:25.354120  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b"
	I1124 03:14:25.382494  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:14:25.382525  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:25.409640  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:14:25.409668  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:14:25.441589  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:14:25.441616  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:14:25.526132  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:14:25.526164  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:14:25.541025  222154 logs.go:123] Gathering logs for kube-apiserver [e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f] ...
	I1124 03:14:25.541050  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f"
	I1124 03:14:25.574168  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:14:25.574196  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:25.607276  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:14:25.607300  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:25.659107  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:14:25.659139  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:25.695245  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:14:25.695274  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:14:25.743387  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:14:25.743415  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:14:25.800879  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1124 03:14:25.189421  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:27.688601  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:29.688767  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:14:28.301834  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:14:28.302307  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:14:28.302363  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:14:28.302423  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:14:28.331161  222154 cri.go:89] found id: "e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f"
	I1124 03:14:28.331179  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:28.331183  222154 cri.go:89] found id: ""
	I1124 03:14:28.331190  222154 logs.go:282] 2 containers: [e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:14:28.331234  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.335257  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.338851  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:14:28.338906  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:14:28.365611  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:28.365628  222154 cri.go:89] found id: ""
	I1124 03:14:28.365635  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:14:28.365681  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.369585  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:14:28.369637  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:14:28.395430  222154 cri.go:89] found id: ""
	I1124 03:14:28.395453  222154 logs.go:282] 0 containers: []
	W1124 03:14:28.395465  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:14:28.395474  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:14:28.395539  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:14:28.422428  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:28.422451  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:28.422459  222154 cri.go:89] found id: ""
	I1124 03:14:28.422468  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:14:28.422524  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.426610  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.430815  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:14:28.430878  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:14:28.457418  222154 cri.go:89] found id: ""
	I1124 03:14:28.457445  222154 logs.go:282] 0 containers: []
	W1124 03:14:28.457453  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:14:28.457459  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:14:28.457523  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:14:28.483306  222154 cri.go:89] found id: "e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b"
	I1124 03:14:28.483327  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:28.483333  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:28.483337  222154 cri.go:89] found id: ""
	I1124 03:14:28.483346  222154 logs.go:282] 3 containers: [e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:14:28.483402  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.487568  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.491362  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.495112  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:14:28.495167  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:14:28.520499  222154 cri.go:89] found id: ""
	I1124 03:14:28.520517  222154 logs.go:282] 0 containers: []
	W1124 03:14:28.520524  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:14:28.520530  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:14:28.520574  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:14:28.547257  222154 cri.go:89] found id: ""
	I1124 03:14:28.547284  222154 logs.go:282] 0 containers: []
	W1124 03:14:28.547297  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:14:28.547309  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:14:28.547324  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:28.574834  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:14:28.574857  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:14:28.622416  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:14:28.622444  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:14:28.653664  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:14:28.653697  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:14:28.749298  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:14:28.749329  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:14:28.763314  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:14:28.763341  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:14:28.821048  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:14:28.821064  222154 logs.go:123] Gathering logs for kube-apiserver [e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f] ...
	I1124 03:14:28.821075  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f"
	I1124 03:14:28.854028  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:14:28.854052  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:28.886240  222154 logs.go:123] Gathering logs for kube-controller-manager [e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b] ...
	I1124 03:14:28.886270  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b"
	I1124 03:14:28.914992  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:14:28.915020  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:28.951240  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:14:28.951269  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:28.984120  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:14:28.984149  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:29.038147  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:14:29.038180  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
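The "listing CRI containers" / "found id" pairs above come from minikube shelling out to crictl with a per-component name filter, exactly as the Run: lines show ("sudo crictl ps -a --quiet --name=<component>"). A minimal, hypothetical Go sketch of that pattern (not minikube's actual cri.go, and assuming crictl and sudo are available on the node) is:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors "sudo crictl ps -a --quiet --name=<component>":
// it returns the IDs of all containers (running or exited) whose name
// matches the given Kubernetes component.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %s: %w", component, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	// Same component names that appear in the log gathering above.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), name, ids)
	}
}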
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0e24698e04c6c       56cc512116c8f       7 seconds ago       Running             busybox                   0                   d498c4db444ad       busybox                                     default
	a907bd80f2cda       52546a367cc9e       13 seconds ago      Running             coredns                   0                   913067ccf951b       coredns-66bc5c9577-lcrl8                    kube-system
	761d04b7a866b       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   d05b913240157       storage-provisioner                         kube-system
	4e21fd2b52dea       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   8277e8f2a73fb       kindnet-ncvw4                               kube-system
	6633b13393dad       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   fcfd195e317a7       kube-proxy-fx42v                            kube-system
	0e17455baa5e8       5f1f5298c888d       38 seconds ago      Running             etcd                      0                   f00910ba25c59       etcd-no-preload-182765                      kube-system
	81f1b5b22bae8       c3994bc696102       38 seconds ago      Running             kube-apiserver            0                   b9e8ac9695fe9       kube-apiserver-no-preload-182765            kube-system
	3ec30b5cb1d0c       7dd6aaa1717ab       38 seconds ago      Running             kube-scheduler            0                   79973ae97ccae       kube-scheduler-no-preload-182765            kube-system
	89d340829b448       c80c8dbafe7dd       38 seconds ago      Running             kube-controller-manager   0                   d7070c75471eb       kube-controller-manager-no-preload-182765   kube-system
	
	
	==> containerd <==
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.309535111Z" level=info msg="CreateContainer within sandbox \"913067ccf951b9c142727d3dd7ab2a5a7999ff58eb755c533978949dd8951a76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.309991569Z" level=info msg="StartContainer for \"761d04b7a866bf1345332bb965846b86e4dc5385c0317a0cac573a55b1c77456\""
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.310913936Z" level=info msg="connecting to shim 761d04b7a866bf1345332bb965846b86e4dc5385c0317a0cac573a55b1c77456" address="unix:///run/containerd/s/88cd9c3715439028092fd9e4f0cde5501ec2b76bf1d0b3dfb1b51222af0114f1" protocol=ttrpc version=3
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.317472027Z" level=info msg="Container a907bd80f2cda76ad20df20a88f4a975ea9e29e53628c2c4358d93332c4ea36f: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.325045995Z" level=info msg="CreateContainer within sandbox \"913067ccf951b9c142727d3dd7ab2a5a7999ff58eb755c533978949dd8951a76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a907bd80f2cda76ad20df20a88f4a975ea9e29e53628c2c4358d93332c4ea36f\""
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.325551646Z" level=info msg="StartContainer for \"a907bd80f2cda76ad20df20a88f4a975ea9e29e53628c2c4358d93332c4ea36f\""
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.326434580Z" level=info msg="connecting to shim a907bd80f2cda76ad20df20a88f4a975ea9e29e53628c2c4358d93332c4ea36f" address="unix:///run/containerd/s/a25a611608eb1d7c3e8553bc5734490597eaf3d3bfd095eb02083e82c3aa5de3" protocol=ttrpc version=3
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.362246253Z" level=info msg="StartContainer for \"761d04b7a866bf1345332bb965846b86e4dc5385c0317a0cac573a55b1c77456\" returns successfully"
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.373966046Z" level=info msg="StartContainer for \"a907bd80f2cda76ad20df20a88f4a975ea9e29e53628c2c4358d93332c4ea36f\" returns successfully"
	Nov 24 03:14:21 no-preload-182765 containerd[663]: time="2025-11-24T03:14:21.994520605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:cf658218-2786-43b2-a609-0e21c6244867,Namespace:default,Attempt:0,}"
	Nov 24 03:14:22 no-preload-182765 containerd[663]: time="2025-11-24T03:14:22.047166418Z" level=info msg="connecting to shim d498c4db444adf927d051b8ca8c71cfee20b5bd91cba471418685a32fed3c98c" address="unix:///run/containerd/s/89f61d7d9a72607b406d9d906e7406c38c166bfe98ccc1a55d850e5de7e78be0" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:14:22 no-preload-182765 containerd[663]: time="2025-11-24T03:14:22.119004404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:cf658218-2786-43b2-a609-0e21c6244867,Namespace:default,Attempt:0,} returns sandbox id \"d498c4db444adf927d051b8ca8c71cfee20b5bd91cba471418685a32fed3c98c\""
	Nov 24 03:14:22 no-preload-182765 containerd[663]: time="2025-11-24T03:14:22.120702528Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.224274811Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.225118452Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396642"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.226444032Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.228549959Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.229187796Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.108443078s"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.229265451Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.233347412Z" level=info msg="CreateContainer within sandbox \"d498c4db444adf927d051b8ca8c71cfee20b5bd91cba471418685a32fed3c98c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.241737160Z" level=info msg="Container 0e24698e04c6c2e0de3138224501884475e3eb7ca71de01b3e3d85f72d5a90da: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.247912281Z" level=info msg="CreateContainer within sandbox \"d498c4db444adf927d051b8ca8c71cfee20b5bd91cba471418685a32fed3c98c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"0e24698e04c6c2e0de3138224501884475e3eb7ca71de01b3e3d85f72d5a90da\""
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.248356223Z" level=info msg="StartContainer for \"0e24698e04c6c2e0de3138224501884475e3eb7ca71de01b3e3d85f72d5a90da\""
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.249308762Z" level=info msg="connecting to shim 0e24698e04c6c2e0de3138224501884475e3eb7ca71de01b3e3d85f72d5a90da" address="unix:///run/containerd/s/89f61d7d9a72607b406d9d906e7406c38c166bfe98ccc1a55d850e5de7e78be0" protocol=ttrpc version=3
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.298444899Z" level=info msg="StartContainer for \"0e24698e04c6c2e0de3138224501884475e3eb7ca71de01b3e3d85f72d5a90da\" returns successfully"
	
	
	==> coredns [a907bd80f2cda76ad20df20a88f4a975ea9e29e53628c2c4358d93332c4ea36f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51020 - 47503 "HINFO IN 6157636609081595951.3166601699698008917. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.071843124s
	
	
	==> describe nodes <==
	Name:               no-preload-182765
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-182765
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=no-preload-182765
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_13_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:13:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-182765
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:14:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:14:28 +0000   Mon, 24 Nov 2025 03:13:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:14:28 +0000   Mon, 24 Nov 2025 03:13:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:14:28 +0000   Mon, 24 Nov 2025 03:13:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:14:28 +0000   Mon, 24 Nov 2025 03:14:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-182765
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                dfa5c123-4c4a-4093-8de8-3ab7053a4f09
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-lcrl8                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-182765                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-ncvw4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-182765             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-182765    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-fx42v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-182765             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  33s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node no-preload-182765 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node no-preload-182765 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node no-preload-182765 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node no-preload-182765 event: Registered Node no-preload-182765 in Controller
	  Normal  NodeReady                14s   kubelet          Node no-preload-182765 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [0e17455baa5e87392cbabd7c87243a3cdd8cae150abbf559b91ccdca7581766e] <==
	{"level":"warn","ts":"2025-11-24T03:13:54.780389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.797077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.807702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.821947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.826679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.839623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.845556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.855759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.866599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.874643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.884699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.893994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.909754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.916894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.923620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.934319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.945705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.966308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.981854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.990043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.996263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:55.013830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:55.023096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:55.031653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:55.089632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:14:31 up 56 min,  0 user,  load average: 2.02, 2.69, 1.91
	Linux no-preload-182765 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4e21fd2b52dea181df0dd70ffe9e802ac15de322719f3e7f928b0dbf01549b41] <==
	I1124 03:14:07.433129       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:14:07.433402       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:14:07.433558       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:14:07.433574       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:14:07.433592       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:14:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:14:07.638356       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:14:07.638399       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:14:07.638414       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:14:07.638545       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:14:08.038702       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:14:08.038726       1 metrics.go:72] Registering metrics
	I1124 03:14:08.038805       1 controller.go:711] "Syncing nftables rules"
	I1124 03:14:17.644323       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:14:17.644389       1 main.go:301] handling current node
	I1124 03:14:27.639352       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:14:27.639386       1 main.go:301] handling current node
	
	
	==> kube-apiserver [81f1b5b22bae8268c7c78bb74e6e2397a13fc858cb1c682e7bbefe963a285b5b] <==
	I1124 03:13:55.697109       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1124 03:13:55.702540       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:13:55.702706       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:13:55.710749       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:13:55.710889       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:13:55.733250       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 03:13:55.736273       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:13:56.601251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:13:56.604941       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:13:56.604960       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:13:57.082017       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:13:57.116868       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:13:57.195452       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:13:57.201177       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 03:13:57.202093       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:13:57.205719       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:13:57.637522       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:13:58.330350       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:13:58.338938       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:13:58.344954       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:14:03.290830       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:14:03.341569       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:14:03.346323       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:14:03.440937       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 03:14:30.790598       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:54310: use of closed network connection
	
	
	==> kube-controller-manager [89d340829b44812b31018d917cbbe98a95714f81eba44bfe7a7308537f360085] <==
	I1124 03:14:02.637248       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 03:14:02.637286       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:14:02.637345       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:14:02.637398       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:14:02.637399       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 03:14:02.637428       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 03:14:02.637402       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 03:14:02.637496       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-182765"
	I1124 03:14:02.637541       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:14:02.637543       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 03:14:02.637429       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:14:02.637838       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:14:02.637942       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:14:02.637970       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:14:02.638030       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:14:02.638148       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:14:02.638287       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:14:02.638335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:14:02.640420       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:14:02.642208       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:14:02.642276       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:14:02.649384       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 03:14:02.650529       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 03:14:02.660456       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:14:22.640392       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6633b13393dad2206e1ce736b781156f8e7c78d55887d396bc37287b6aaeb952] <==
	I1124 03:14:04.211478       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:14:04.276845       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:14:04.377744       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:14:04.377824       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 03:14:04.377928       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:14:04.400162       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:14:04.400217       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:14:04.405337       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:14:04.406059       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:14:04.406134       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:14:04.408594       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:14:04.408607       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:14:04.408613       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:14:04.408618       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:14:04.408598       1 config.go:200] "Starting service config controller"
	I1124 03:14:04.408645       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:14:04.408655       1 config.go:309] "Starting node config controller"
	I1124 03:14:04.408767       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:14:04.408833       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:14:04.509279       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:14:04.509292       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:14:04.509315       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3ec30b5cb1d0c7532ea213820c5e07c77941ee5e43af8e7204cb7bf2fa9f092c] <==
	I1124 03:13:55.661447       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 03:13:55.670804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:13:55.671548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:13:55.671548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:13:55.671627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:13:55.671634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:13:55.671727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:13:55.672335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:13:55.672761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:13:55.672922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:13:55.673015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:13:55.673052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:13:55.673992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:13:55.673991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:13:55.674681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:13:55.674683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:13:55.674830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:13:56.583081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:13:56.598203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:13:56.599124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:13:56.646890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:13:56.813350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:13:56.815310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:13:56.963827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 03:13:59.960968       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:13:59 no-preload-182765 kubelet[2164]: I1124 03:13:59.220367    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-182765" podStartSLOduration=1.22034543 podStartE2EDuration="1.22034543s" podCreationTimestamp="2025-11-24 03:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:59.207330931 +0000 UTC m=+1.126221766" watchObservedRunningTime="2025-11-24 03:13:59.22034543 +0000 UTC m=+1.139236255"
	Nov 24 03:13:59 no-preload-182765 kubelet[2164]: I1124 03:13:59.229558    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-182765" podStartSLOduration=1.229535176 podStartE2EDuration="1.229535176s" podCreationTimestamp="2025-11-24 03:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:59.22054426 +0000 UTC m=+1.139435091" watchObservedRunningTime="2025-11-24 03:13:59.229535176 +0000 UTC m=+1.148426009"
	Nov 24 03:13:59 no-preload-182765 kubelet[2164]: I1124 03:13:59.241368    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-182765" podStartSLOduration=1.241345133 podStartE2EDuration="1.241345133s" podCreationTimestamp="2025-11-24 03:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:59.230192246 +0000 UTC m=+1.149083079" watchObservedRunningTime="2025-11-24 03:13:59.241345133 +0000 UTC m=+1.160235965"
	Nov 24 03:13:59 no-preload-182765 kubelet[2164]: I1124 03:13:59.250144    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-182765" podStartSLOduration=1.250126869 podStartE2EDuration="1.250126869s" podCreationTimestamp="2025-11-24 03:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:59.241542645 +0000 UTC m=+1.160433468" watchObservedRunningTime="2025-11-24 03:13:59.250126869 +0000 UTC m=+1.169017700"
	Nov 24 03:14:02 no-preload-182765 kubelet[2164]: I1124 03:14:02.703848    2164 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 03:14:02 no-preload-182765 kubelet[2164]: I1124 03:14:02.704454    2164 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486031    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c8c52d6-d4fd-4be2-8246-f96d95997a62-lib-modules\") pod \"kube-proxy-fx42v\" (UID: \"4c8c52d6-d4fd-4be2-8246-f96d95997a62\") " pod="kube-system/kube-proxy-fx42v"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486078    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkhkp\" (UniqueName: \"kubernetes.io/projected/4c8c52d6-d4fd-4be2-8246-f96d95997a62-kube-api-access-kkhkp\") pod \"kube-proxy-fx42v\" (UID: \"4c8c52d6-d4fd-4be2-8246-f96d95997a62\") " pod="kube-system/kube-proxy-fx42v"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486103    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d2a43f2-69e3-4768-8e15-39fbe53d92f9-lib-modules\") pod \"kindnet-ncvw4\" (UID: \"6d2a43f2-69e3-4768-8e15-39fbe53d92f9\") " pod="kube-system/kindnet-ncvw4"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486169    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c8c52d6-d4fd-4be2-8246-f96d95997a62-xtables-lock\") pod \"kube-proxy-fx42v\" (UID: \"4c8c52d6-d4fd-4be2-8246-f96d95997a62\") " pod="kube-system/kube-proxy-fx42v"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486203    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c8c52d6-d4fd-4be2-8246-f96d95997a62-kube-proxy\") pod \"kube-proxy-fx42v\" (UID: \"4c8c52d6-d4fd-4be2-8246-f96d95997a62\") " pod="kube-system/kube-proxy-fx42v"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486226    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6d2a43f2-69e3-4768-8e15-39fbe53d92f9-cni-cfg\") pod \"kindnet-ncvw4\" (UID: \"6d2a43f2-69e3-4768-8e15-39fbe53d92f9\") " pod="kube-system/kindnet-ncvw4"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486256    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d2a43f2-69e3-4768-8e15-39fbe53d92f9-xtables-lock\") pod \"kindnet-ncvw4\" (UID: \"6d2a43f2-69e3-4768-8e15-39fbe53d92f9\") " pod="kube-system/kindnet-ncvw4"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486283    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv66f\" (UniqueName: \"kubernetes.io/projected/6d2a43f2-69e3-4768-8e15-39fbe53d92f9-kube-api-access-kv66f\") pod \"kindnet-ncvw4\" (UID: \"6d2a43f2-69e3-4768-8e15-39fbe53d92f9\") " pod="kube-system/kindnet-ncvw4"
	Nov 24 03:14:04 no-preload-182765 kubelet[2164]: I1124 03:14:04.209871    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fx42v" podStartSLOduration=1.209850023 podStartE2EDuration="1.209850023s" podCreationTimestamp="2025-11-24 03:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:14:04.209837062 +0000 UTC m=+6.128727894" watchObservedRunningTime="2025-11-24 03:14:04.209850023 +0000 UTC m=+6.128740856"
	Nov 24 03:14:08 no-preload-182765 kubelet[2164]: I1124 03:14:08.223011    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ncvw4" podStartSLOduration=2.417680204 podStartE2EDuration="5.222986026s" podCreationTimestamp="2025-11-24 03:14:03 +0000 UTC" firstStartedPulling="2025-11-24 03:14:04.336369549 +0000 UTC m=+6.255260360" lastFinishedPulling="2025-11-24 03:14:07.141675371 +0000 UTC m=+9.060566182" observedRunningTime="2025-11-24 03:14:08.222733252 +0000 UTC m=+10.141624086" watchObservedRunningTime="2025-11-24 03:14:08.222986026 +0000 UTC m=+10.141876858"
	Nov 24 03:14:17 no-preload-182765 kubelet[2164]: I1124 03:14:17.737616    2164 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:14:18 no-preload-182765 kubelet[2164]: I1124 03:14:18.002308    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f4km\" (UniqueName: \"kubernetes.io/projected/3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7-kube-api-access-9f4km\") pod \"coredns-66bc5c9577-lcrl8\" (UID: \"3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7\") " pod="kube-system/coredns-66bc5c9577-lcrl8"
	Nov 24 03:14:18 no-preload-182765 kubelet[2164]: I1124 03:14:18.002375    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1-tmp\") pod \"storage-provisioner\" (UID: \"271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1\") " pod="kube-system/storage-provisioner"
	Nov 24 03:14:18 no-preload-182765 kubelet[2164]: I1124 03:14:18.002464    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7-config-volume\") pod \"coredns-66bc5c9577-lcrl8\" (UID: \"3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7\") " pod="kube-system/coredns-66bc5c9577-lcrl8"
	Nov 24 03:14:18 no-preload-182765 kubelet[2164]: I1124 03:14:18.002501    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw8fd\" (UniqueName: \"kubernetes.io/projected/271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1-kube-api-access-xw8fd\") pod \"storage-provisioner\" (UID: \"271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1\") " pod="kube-system/storage-provisioner"
	Nov 24 03:14:19 no-preload-182765 kubelet[2164]: I1124 03:14:19.246396    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lcrl8" podStartSLOduration=16.246372758 podStartE2EDuration="16.246372758s" podCreationTimestamp="2025-11-24 03:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:14:19.246339636 +0000 UTC m=+21.165230468" watchObservedRunningTime="2025-11-24 03:14:19.246372758 +0000 UTC m=+21.165263590"
	Nov 24 03:14:19 no-preload-182765 kubelet[2164]: I1124 03:14:19.267821    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.267667894 podStartE2EDuration="15.267667894s" podCreationTimestamp="2025-11-24 03:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:14:19.266421815 +0000 UTC m=+21.185312651" watchObservedRunningTime="2025-11-24 03:14:19.267667894 +0000 UTC m=+21.186558717"
	Nov 24 03:14:21 no-preload-182765 kubelet[2164]: I1124 03:14:21.723163    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrk5\" (UniqueName: \"kubernetes.io/projected/cf658218-2786-43b2-a609-0e21c6244867-kube-api-access-mgrk5\") pod \"busybox\" (UID: \"cf658218-2786-43b2-a609-0e21c6244867\") " pod="default/busybox"
	Nov 24 03:14:25 no-preload-182765 kubelet[2164]: I1124 03:14:25.263717    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.153949098 podStartE2EDuration="4.263693817s" podCreationTimestamp="2025-11-24 03:14:21 +0000 UTC" firstStartedPulling="2025-11-24 03:14:22.120371533 +0000 UTC m=+24.039262350" lastFinishedPulling="2025-11-24 03:14:24.230116258 +0000 UTC m=+26.149007069" observedRunningTime="2025-11-24 03:14:25.263594247 +0000 UTC m=+27.182485067" watchObservedRunningTime="2025-11-24 03:14:25.263693817 +0000 UTC m=+27.182584649"
	
	
	==> storage-provisioner [761d04b7a866bf1345332bb965846b86e4dc5385c0317a0cac573a55b1c77456] <==
	I1124 03:14:18.372229       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:14:18.381064       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:14:18.381112       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:14:18.382889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:18.388152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:14:18.388341       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:14:18.388482       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6db7df4-b085-465b-be4e-b02a26c1b5f7", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-182765_f7a9146e-611c-4253-9310-4f29c0034e99 became leader
	I1124 03:14:18.388520       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-182765_f7a9146e-611c-4253-9310-4f29c0034e99!
	W1124 03:14:18.390559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:18.393230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:14:18.488826       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-182765_f7a9146e-611c-4253-9310-4f29c0034e99!
	W1124 03:14:20.396070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:20.401005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:22.404078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:22.408760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:24.411757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:24.416610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:26.419866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:26.423831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:28.426840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:28.430746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:30.433252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:30.436939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
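Note on the kubelet pod_startup_latency_tracker lines above: podStartE2EDuration runs from podCreationTimestamp to the observed running time, and podStartSLOduration here works out to that same interval minus the image-pull window (firstStartedPulling to lastFinishedPulling). The kindnet-ncvw4 entry checks out: 03:14:08.223 - 03:14:03 ≈ 5.22s end-to-end, minus the ≈2.81s pull (03:14:04.336 → 03:14:07.142) ≈ 2.42s, matching podStartSLOduration=2.4176…. Pods that needed no pull (firstStartedPulling at the zero time 0001-01-01) report equal SLO and E2E durations, as coredns-66bc5c9577-lcrl8 and storage-provisioner do.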
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-182765 -n no-preload-182765
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-182765 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
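The repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above come from its leader-election loop, which still renews its lock on an Endpoints object (kube-system/k8s.io-minikube-hostpath) rather than a coordination.k8s.io Lease; they recur on every renewal and are expected with newer API servers rather than a fault by themselves. A sketch of how to inspect the lock while the cluster is up, assuming kubectl is still pointed at this profile's context:

	kubectl --context no-preload-182765 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The current holder is normally recorded in the control-plane.alpha.kubernetes.io/leader annotation on that object.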
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-182765
helpers_test.go:243: (dbg) docker inspect no-preload-182765:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880",
	        "Created": "2025-11-24T03:13:28.878660504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 257533,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:13:28.922498494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880/hosts",
	        "LogPath": "/var/lib/docker/containers/7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880/7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880-json.log",
	        "Name": "/no-preload-182765",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-182765:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-182765",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7a0eb0a9c43e7eb40e5b6365edb470d5529a62de6099eafac357389dffcf3880",
	                "LowerDir": "/var/lib/docker/overlay2/5b3cd16322ccef02ae6a882d84c589ac763afc9604c420b3747093b3ecd2eddd-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b3cd16322ccef02ae6a882d84c589ac763afc9604c420b3747093b3ecd2eddd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b3cd16322ccef02ae6a882d84c589ac763afc9604c420b3747093b3ecd2eddd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b3cd16322ccef02ae6a882d84c589ac763afc9604c420b3747093b3ecd2eddd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-182765",
	                "Source": "/var/lib/docker/volumes/no-preload-182765/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-182765",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-182765",
	                "name.minikube.sigs.k8s.io": "no-preload-182765",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ab2b2ed6b1f842385b05cc9590337eeded4e73a971730d3cf9b9594009bfef09",
	            "SandboxKey": "/var/run/docker/netns/ab2b2ed6b1f8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-182765": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4e3f4179ae31456aea033a2bb15d23923301eed3e80090edbf7ca8514d0dcff5",
	                    "EndpointID": "4ac9c9278eab5dbe766429379a42e25b92f182e29063d12f5875257cb9ba99cc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ae:69:18:c9:42:43",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-182765",
	                        "7a0eb0a9c43e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
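In the inspect output above, HostConfig.PortBindings requests dynamically assigned host ports (HostPort: ""), and the actual assignments appear under NetworkSettings.Ports, e.g. 8443/tcp → 127.0.0.1:33070 for the Kubernetes apiserver. A quick way to pull a single mapping back out of a live container (a sketch, assuming the container is still running on the same host):

	docker port no-preload-182765 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-182765

Both should report the 127.0.0.1:33070 mapping shown above.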
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-182765 -n no-preload-182765
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-182765 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-182765 logs -n 25: (1.054284357s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-682898 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo containerd config dump                                                                                                                                                                                                        │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ ssh     │ -p cilium-682898 sudo crio config                                                                                                                                                                                                                   │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p cilium-682898                                                                                                                                                                                                                                    │ cilium-682898          │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p old-k8s-version-838815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-838815 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:13 UTC │
	│ ssh     │ -p NoKubernetes-502612 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ stop    │ -p NoKubernetes-502612                                                                                                                                                                                                                              │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p NoKubernetes-502612 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ ssh     │ -p NoKubernetes-502612 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ delete  │ -p NoKubernetes-502612                                                                                                                                                                                                                              │ NoKubernetes-502612    │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-182765      │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-838815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-838815 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ stop    │ -p old-k8s-version-838815 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-838815 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-838815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-838815 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p old-k8s-version-838815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-838815 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:14 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:13:45
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:13:45.063573  261872 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:13:45.063693  261872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:45.063705  261872 out.go:374] Setting ErrFile to fd 2...
	I1124 03:13:45.063709  261872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:45.063942  261872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:13:45.064411  261872 out.go:368] Setting JSON to false
	I1124 03:13:45.065542  261872 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3368,"bootTime":1763950657,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:13:45.065595  261872 start.go:143] virtualization: kvm guest
	I1124 03:13:45.067548  261872 out.go:179] * [old-k8s-version-838815] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:13:45.068712  261872 notify.go:221] Checking for updates...
	I1124 03:13:45.068742  261872 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:13:45.070032  261872 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:13:45.071265  261872 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:13:45.072490  261872 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 03:13:45.073808  261872 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:13:45.075093  261872 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:13:45.076623  261872 config.go:182] Loaded profile config "old-k8s-version-838815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 03:13:45.078393  261872 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1124 03:13:45.079510  261872 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:13:45.104663  261872 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:13:45.104768  261872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:45.164545  261872 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-24 03:13:45.15467142 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:45.164668  261872 docker.go:319] overlay module found
	I1124 03:13:45.167182  261872 out.go:179] * Using the docker driver based on existing profile
	I1124 03:13:45.168219  261872 start.go:309] selected driver: docker
	I1124 03:13:45.168233  261872 start.go:927] validating driver "docker" against &{Name:old-k8s-version-838815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-838815 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountStr
ing: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:45.168316  261872 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:13:45.168853  261872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:45.229002  261872 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-24 03:13:45.218604033 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:45.229294  261872 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:13:45.229329  261872 cni.go:84] Creating CNI manager for ""
	I1124 03:13:45.229391  261872 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:13:45.229434  261872 start.go:353] cluster config:
	{Name:old-k8s-version-838815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-838815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:45.231198  261872 out.go:179] * Starting "old-k8s-version-838815" primary control-plane node in "old-k8s-version-838815" cluster
	I1124 03:13:45.232502  261872 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:13:45.233810  261872 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:13:45.234991  261872 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 03:13:45.235026  261872 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1124 03:13:45.235032  261872 cache.go:65] Caching tarball of preloaded images
	I1124 03:13:45.235070  261872 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:13:45.235114  261872 preload.go:238] Found /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 03:13:45.235126  261872 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1124 03:13:45.235252  261872 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/config.json ...
	I1124 03:13:45.255571  261872 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:13:45.255589  261872 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:13:45.255605  261872 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:13:45.255644  261872 start.go:360] acquireMachinesLock for old-k8s-version-838815: {Name:mk8b693c5097c108d6caf8578d5d3410ead3ca46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:45.255709  261872 start.go:364] duration metric: took 42.605µs to acquireMachinesLock for "old-k8s-version-838815"
	I1124 03:13:45.255731  261872 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:13:45.255740  261872 fix.go:54] fixHost starting: 
	I1124 03:13:45.255971  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:45.273238  261872 fix.go:112] recreateIfNeeded on old-k8s-version-838815: state=Stopped err=<nil>
	W1124 03:13:45.273266  261872 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:13:42.782440  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:42.782910  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:42.782966  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:42.783023  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:42.815947  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:42.815972  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:42.815978  222154 cri.go:89] found id: ""
	I1124 03:13:42.815988  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:42.816048  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:42.821068  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:42.825377  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:42.825439  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:42.857111  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:42.857130  222154 cri.go:89] found id: ""
	I1124 03:13:42.857140  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:42.857196  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:42.862037  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:42.862106  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:42.894686  222154 cri.go:89] found id: ""
	I1124 03:13:42.894714  222154 logs.go:282] 0 containers: []
	W1124 03:13:42.894724  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:42.894731  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:42.894817  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:42.926397  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:42.926419  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:42.926424  222154 cri.go:89] found id: ""
	I1124 03:13:42.926434  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:42.926490  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:42.931201  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:42.935486  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:42.935550  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:42.968690  222154 cri.go:89] found id: ""
	I1124 03:13:42.968725  222154 logs.go:282] 0 containers: []
	W1124 03:13:42.968736  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:42.968744  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:42.968831  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:43.001388  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:43.001409  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:43.001416  222154 cri.go:89] found id: ""
	I1124 03:13:43.001424  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:43.001476  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:43.005816  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:43.010343  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:43.010405  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:43.041179  222154 cri.go:89] found id: ""
	I1124 03:13:43.041206  222154 logs.go:282] 0 containers: []
	W1124 03:13:43.041234  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:43.041243  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:43.041300  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:43.071844  222154 cri.go:89] found id: ""
	I1124 03:13:43.071871  222154 logs.go:282] 0 containers: []
	W1124 03:13:43.071882  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:43.071894  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:43.071907  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:43.182610  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:43.182650  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:43.256109  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:43.256129  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:13:43.256143  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:43.295130  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:43.295166  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:43.336837  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:13:43.336877  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:43.375760  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:13:43.375812  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:43.409160  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:13:43.409182  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:43.454125  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:43.454159  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:43.472184  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:13:43.472214  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:43.536093  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:13:43.536127  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:43.578848  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:13:43.578883  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:13:43.634581  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:13:43.634620  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:13:44.399555  256790 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.136724738s)
	I1124 03:13:44.399580  256790 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-4883/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 03:13:44.399600  256790 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:13:44.399642  256790 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:13:44.821198  256790 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-4883/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 03:13:44.821237  256790 cache_images.go:125] Successfully loaded all cached images
	I1124 03:13:44.821243  256790 cache_images.go:94] duration metric: took 10.161497332s to LoadCachedImages
	I1124 03:13:44.821257  256790 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1124 03:13:44.821363  256790 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-182765 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-182765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:13:44.821420  256790 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:13:44.851903  256790 cni.go:84] Creating CNI manager for ""
	I1124 03:13:44.851920  256790 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:13:44.851931  256790 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:13:44.851952  256790 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-182765 NodeName:no-preload-182765 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:13:44.852066  256790 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-182765"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:13:44.852118  256790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:13:44.861657  256790 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 03:13:44.861719  256790 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 03:13:44.869963  256790 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1124 03:13:44.870050  256790 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 03:13:44.870081  256790 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21975-4883/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1124 03:13:44.870164  256790 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21975-4883/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1124 03:13:44.873929  256790 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 03:13:44.873957  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1124 03:13:45.755081  256790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:45.769749  256790 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 03:13:45.773873  256790 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 03:13:45.773907  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1124 03:13:45.870033  256790 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 03:13:45.876220  256790 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 03:13:45.876253  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1124 03:13:46.121148  256790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:13:46.128974  256790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1124 03:13:46.142040  256790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:13:46.302008  256790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1124 03:13:46.315155  256790 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:13:46.319270  256790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:13:46.365839  256790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:46.454310  256790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:13:46.478140  256790 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765 for IP: 192.168.85.2
	I1124 03:13:46.478161  256790 certs.go:195] generating shared ca certs ...
	I1124 03:13:46.478180  256790 certs.go:227] acquiring lock for ca certs: {Name:mkd28e9f2e8e31fe23d0ba27851eb0df56d94420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.478333  256790 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key
	I1124 03:13:46.478398  256790 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key
	I1124 03:13:46.478412  256790 certs.go:257] generating profile certs ...
	I1124 03:13:46.478485  256790 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.key
	I1124 03:13:46.478501  256790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt with IP's: []
	I1124 03:13:46.646111  256790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt ...
	I1124 03:13:46.646143  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt: {Name:mk73539b3f54c1961564b6a79fff2497576cb92b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.646339  256790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.key ...
	I1124 03:13:46.646352  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.key: {Name:mk58ceb1530d77d90debb469585bea533f41da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.646449  256790 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.key.cdf44a03
	I1124 03:13:46.646469  256790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.crt.cdf44a03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 03:13:46.816691  256790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.crt.cdf44a03 ...
	I1124 03:13:46.816717  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.crt.cdf44a03: {Name:mk27d6c8cc3794b3c9d0a9b94e935219741af6b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.816901  256790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.key.cdf44a03 ...
	I1124 03:13:46.816918  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.key.cdf44a03: {Name:mk1d546ce94c496d8da0bcf0c05eba41706e1518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.817006  256790 certs.go:382] copying /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.crt.cdf44a03 -> /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.crt
	I1124 03:13:46.817097  256790 certs.go:386] copying /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.key.cdf44a03 -> /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.key
	I1124 03:13:46.817157  256790 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.key
	I1124 03:13:46.817178  256790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.crt with IP's: []
	I1124 03:13:46.857388  256790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.crt ...
	I1124 03:13:46.857425  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.crt: {Name:mk44f1fd8866b0e73a0df7a8d224ae9f9cfeb9bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.857606  256790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.key ...
	I1124 03:13:46.857626  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.key: {Name:mk6b674d94e7b9f3efc4ba5a0be39c3c8820e891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:46.857894  256790 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem (1338 bytes)
	W1124 03:13:46.857957  256790 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429_empty.pem, impossibly tiny 0 bytes
	I1124 03:13:46.857968  256790 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:13:46.858004  256790 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:13:46.858036  256790 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:13:46.858072  256790 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem (1679 bytes)
	I1124 03:13:46.858143  256790 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:13:46.858963  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:13:46.878497  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:13:46.896895  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:13:46.914015  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:13:46.932134  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:13:46.949903  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:13:46.966726  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:13:46.985102  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:13:47.002201  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem --> /usr/share/ca-certificates/8429.pem (1338 bytes)
	I1124 03:13:47.023536  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /usr/share/ca-certificates/84292.pem (1708 bytes)
	I1124 03:13:47.040674  256790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:13:47.057844  256790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:13:47.069955  256790 ssh_runner.go:195] Run: openssl version
	I1124 03:13:47.076042  256790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8429.pem && ln -fs /usr/share/ca-certificates/8429.pem /etc/ssl/certs/8429.pem"
	I1124 03:13:47.084314  256790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8429.pem
	I1124 03:13:47.088076  256790 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/8429.pem
	I1124 03:13:47.088133  256790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8429.pem
	I1124 03:13:47.122851  256790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8429.pem /etc/ssl/certs/51391683.0"
	I1124 03:13:47.131586  256790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84292.pem && ln -fs /usr/share/ca-certificates/84292.pem /etc/ssl/certs/84292.pem"
	I1124 03:13:47.140008  256790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84292.pem
	I1124 03:13:47.143689  256790 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/84292.pem
	I1124 03:13:47.143757  256790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84292.pem
	I1124 03:13:47.178455  256790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84292.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:13:47.187236  256790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:13:47.195760  256790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:47.199811  256790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:47.199865  256790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:47.238221  256790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:13:47.247116  256790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:13:47.250904  256790 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:13:47.250966  256790 kubeadm.go:401] StartCluster: {Name:no-preload-182765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-182765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:47.251063  256790 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:13:47.251115  256790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:13:47.277332  256790 cri.go:89] found id: ""
	I1124 03:13:47.277405  256790 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:13:47.285583  256790 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:13:47.293758  256790 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:13:47.293850  256790 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:13:47.302034  256790 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:13:47.302058  256790 kubeadm.go:158] found existing configuration files:
	
	I1124 03:13:47.302121  256790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:13:47.310408  256790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:13:47.310462  256790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:13:47.318990  256790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:13:47.327190  256790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:13:47.327239  256790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:13:47.334835  256790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:13:47.342219  256790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:13:47.342274  256790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:13:47.349204  256790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:13:47.356687  256790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:13:47.356732  256790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:13:47.363898  256790 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:13:47.398897  256790 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:13:47.398952  256790 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:13:47.418532  256790 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:13:47.418630  256790 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:13:47.418675  256790 kubeadm.go:319] OS: Linux
	I1124 03:13:47.418732  256790 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:13:47.418805  256790 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:13:47.418868  256790 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:13:47.418933  256790 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:13:47.419002  256790 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:13:47.419073  256790 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:13:47.419156  256790 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:13:47.419255  256790 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:13:47.477815  256790 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:13:47.477986  256790 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:13:47.478155  256790 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:13:47.482588  256790 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:13:47.485410  256790 out.go:252]   - Generating certificates and keys ...
	I1124 03:13:47.485509  256790 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:13:47.485602  256790 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:13:47.512216  256790 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:13:47.791516  256790 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:13:45.275046  261872 out.go:252] * Restarting existing docker container for "old-k8s-version-838815" ...
	I1124 03:13:45.275119  261872 cli_runner.go:164] Run: docker start old-k8s-version-838815
	I1124 03:13:45.620024  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:45.640739  261872 kic.go:430] container "old-k8s-version-838815" state is running.
	I1124 03:13:45.641204  261872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-838815
	I1124 03:13:45.662968  261872 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/config.json ...
	I1124 03:13:45.663231  261872 machine.go:94] provisionDockerMachine start ...
	I1124 03:13:45.663313  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:45.683416  261872 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:45.683656  261872 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33072 <nil> <nil>}
	I1124 03:13:45.683670  261872 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:13:45.684408  261872 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44748->127.0.0.1:33072: read: connection reset by peer
	I1124 03:13:48.827423  261872 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-838815
	
	I1124 03:13:48.827455  261872 ubuntu.go:182] provisioning hostname "old-k8s-version-838815"
	I1124 03:13:48.827530  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:48.849134  261872 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:48.849485  261872 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33072 <nil> <nil>}
	I1124 03:13:48.849506  261872 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-838815 && echo "old-k8s-version-838815" | sudo tee /etc/hostname
	I1124 03:13:48.998034  261872 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-838815
	
	I1124 03:13:48.998122  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:49.017196  261872 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:49.017467  261872 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33072 <nil> <nil>}
	I1124 03:13:49.017485  261872 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-838815' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-838815/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-838815' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:13:49.157389  261872 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:13:49.157422  261872 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-4883/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-4883/.minikube}
	I1124 03:13:49.157439  261872 ubuntu.go:190] setting up certificates
	I1124 03:13:49.157459  261872 provision.go:84] configureAuth start
	I1124 03:13:49.157517  261872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-838815
	I1124 03:13:49.175307  261872 provision.go:143] copyHostCerts
	I1124 03:13:49.175364  261872 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem, removing ...
	I1124 03:13:49.175381  261872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem
	I1124 03:13:49.175448  261872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem (1078 bytes)
	I1124 03:13:49.175546  261872 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem, removing ...
	I1124 03:13:49.175564  261872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem
	I1124 03:13:49.175593  261872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem (1123 bytes)
	I1124 03:13:49.175660  261872 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem, removing ...
	I1124 03:13:49.175668  261872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem
	I1124 03:13:49.175690  261872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem (1679 bytes)
	I1124 03:13:49.175751  261872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-838815 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-838815]
	I1124 03:13:49.251404  261872 provision.go:177] copyRemoteCerts
	I1124 03:13:49.251471  261872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:13:49.251502  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:49.270581  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:49.370991  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:13:49.388399  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 03:13:49.405594  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:13:49.422948  261872 provision.go:87] duration metric: took 265.476289ms to configureAuth
	I1124 03:13:49.422978  261872 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:13:49.423159  261872 config.go:182] Loaded profile config "old-k8s-version-838815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 03:13:49.423177  261872 machine.go:97] duration metric: took 3.759931545s to provisionDockerMachine
	I1124 03:13:49.423186  261872 start.go:293] postStartSetup for "old-k8s-version-838815" (driver="docker")
	I1124 03:13:49.423205  261872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:13:49.423257  261872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:13:49.423288  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:49.442031  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:49.549181  261872 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:13:49.552968  261872 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:13:49.553001  261872 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:13:49.553014  261872 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/addons for local assets ...
	I1124 03:13:49.553084  261872 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/files for local assets ...
	I1124 03:13:49.553182  261872 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem -> 84292.pem in /etc/ssl/certs
	I1124 03:13:49.553299  261872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:13:49.562836  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:13:49.582845  261872 start.go:296] duration metric: took 159.631045ms for postStartSetup
	I1124 03:13:49.582937  261872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:13:49.582984  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:49.603746  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:49.704223  261872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:13:49.709257  261872 fix.go:56] duration metric: took 4.453511671s for fixHost
	I1124 03:13:49.709278  261872 start.go:83] releasing machines lock for "old-k8s-version-838815", held for 4.453557618s
	I1124 03:13:49.709339  261872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-838815
	I1124 03:13:49.729262  261872 ssh_runner.go:195] Run: cat /version.json
	I1124 03:13:49.729344  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:49.729357  261872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:13:49.729455  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:49.750433  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:49.750853  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:49.904900  261872 ssh_runner.go:195] Run: systemctl --version
	I1124 03:13:49.912654  261872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:13:49.917879  261872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:13:49.917995  261872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:13:49.926022  261872 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:13:49.926045  261872 start.go:496] detecting cgroup driver to use...
	I1124 03:13:49.926077  261872 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:13:49.926117  261872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:13:49.945996  261872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:13:49.961277  261872 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:13:49.961353  261872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:13:49.978494  261872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:13:49.993905  261872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:13:50.082335  261872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:13:50.180070  261872 docker.go:234] disabling docker service ...
	I1124 03:13:50.180148  261872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:13:50.197022  261872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:13:50.210855  261872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:13:50.296272  261872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:13:50.382358  261872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:13:50.395675  261872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:13:50.409575  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 03:13:50.418375  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:13:50.427099  261872 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 03:13:50.427158  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 03:13:50.435870  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:13:50.444942  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:13:50.453675  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:13:50.462003  261872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:13:50.469918  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:13:50.478445  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:13:50.486727  261872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:13:50.495415  261872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:13:50.502528  261872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:13:50.509670  261872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:50.589308  261872 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 03:13:50.703385  261872 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:13:50.703462  261872 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:13:50.707755  261872 start.go:564] Will wait 60s for crictl version
	I1124 03:13:50.707827  261872 ssh_runner.go:195] Run: which crictl
	I1124 03:13:50.711579  261872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:13:50.738380  261872 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:13:50.738444  261872 ssh_runner.go:195] Run: containerd --version
	I1124 03:13:50.759417  261872 ssh_runner.go:195] Run: containerd --version
	I1124 03:13:50.783375  261872 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1124 03:13:46.174035  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:46.174513  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:46.174580  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:46.174641  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:46.202025  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:46.202045  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:46.202049  222154 cri.go:89] found id: ""
	I1124 03:13:46.202056  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:46.202106  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.206050  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.209735  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:46.209834  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:46.238096  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:46.238119  222154 cri.go:89] found id: ""
	I1124 03:13:46.238128  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:46.238199  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.242110  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:46.242175  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:46.267229  222154 cri.go:89] found id: ""
	I1124 03:13:46.267263  222154 logs.go:282] 0 containers: []
	W1124 03:13:46.267270  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:46.267276  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:46.267319  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:46.293274  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:46.293297  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:46.293304  222154 cri.go:89] found id: ""
	I1124 03:13:46.293316  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:46.293374  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.297360  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.301203  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:46.301264  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:46.330284  222154 cri.go:89] found id: ""
	I1124 03:13:46.330304  222154 logs.go:282] 0 containers: []
	W1124 03:13:46.330311  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:46.330320  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:46.330364  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:46.357550  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:46.357569  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:46.357572  222154 cri.go:89] found id: ""
	I1124 03:13:46.357579  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:46.357631  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.361542  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:46.365711  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:46.365789  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:46.391493  222154 cri.go:89] found id: ""
	I1124 03:13:46.391520  222154 logs.go:282] 0 containers: []
	W1124 03:13:46.391531  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:46.391538  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:46.391600  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:46.422363  222154 cri.go:89] found id: ""
	I1124 03:13:46.422390  222154 logs.go:282] 0 containers: []
	W1124 03:13:46.422398  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:46.422408  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:13:46.422418  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:13:46.466844  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:46.466872  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:46.565012  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:13:46.565044  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:46.600024  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:13:46.600051  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:46.664261  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:13:46.664292  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:46.695951  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:13:46.695980  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:46.724289  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:13:46.724318  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:46.761731  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:13:46.761760  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:13:46.794474  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:46.794500  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:46.808440  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:46.808473  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:46.866985  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:46.867013  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:13:46.867028  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:46.898817  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:46.898847  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:49.433841  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:49.434243  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:49.434305  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:49.434360  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:49.465869  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:49.465892  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:49.465898  222154 cri.go:89] found id: ""
	I1124 03:13:49.465906  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:49.465956  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.470302  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.474402  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:49.474458  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:49.499866  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:49.499886  222154 cri.go:89] found id: ""
	I1124 03:13:49.499895  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:49.499944  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.503761  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:49.503857  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:49.529482  222154 cri.go:89] found id: ""
	I1124 03:13:49.529509  222154 logs.go:282] 0 containers: []
	W1124 03:13:49.529517  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:49.529523  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:49.529575  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:49.560512  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:49.560535  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:49.560541  222154 cri.go:89] found id: ""
	I1124 03:13:49.560550  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:49.560606  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.565155  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.568964  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:49.569024  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:49.602037  222154 cri.go:89] found id: ""
	I1124 03:13:49.602064  222154 logs.go:282] 0 containers: []
	W1124 03:13:49.602076  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:49.602083  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:49.602136  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:49.630842  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:49.630865  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:49.630871  222154 cri.go:89] found id: ""
	I1124 03:13:49.630880  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:49.630931  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.635044  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:49.638687  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:49.638741  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:49.665239  222154 cri.go:89] found id: ""
	I1124 03:13:49.665261  222154 logs.go:282] 0 containers: []
	W1124 03:13:49.665269  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:49.665274  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:49.665326  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:49.692995  222154 cri.go:89] found id: ""
	I1124 03:13:49.693017  222154 logs.go:282] 0 containers: []
	W1124 03:13:49.693025  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:49.693035  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:13:49.693045  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:49.727824  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:13:49.727851  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:49.800927  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:13:49.800963  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:49.836038  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:13:49.836063  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:13:49.882066  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:13:49.882105  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:13:49.914295  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:49.914324  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:49.928107  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:49.928142  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:49.961137  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:13:49.961167  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:49.997955  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:13:49.997982  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:50.030344  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:50.030389  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:50.126887  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:50.126920  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:50.194493  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:50.194516  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:13:50.194531  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
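
The "connection to the server localhost:8443 was refused" failure above simply means nothing is listening on the apiserver port yet, which is why describe-nodes is skipped and container log gathering continues. A cheap way to confirm that condition before invoking kubectl is a plain TCP dial; this sketch is illustrative (not part of minikube) and hard-codes the address from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // apiserverListening reports whether anything accepts TCP connections on addr.
    func apiserverListening(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        fmt.Println("apiserver listening:", apiserverListening("127.0.0.1:8443"))
    }
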
	I1124 03:13:48.089976  256790 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:13:48.671180  256790 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:13:48.847653  256790 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:13:48.847833  256790 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-182765] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:13:49.113359  256790 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:13:49.113541  256790 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-182765] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:13:49.259626  256790 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:13:49.550081  256790 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:13:49.833155  256790 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:13:49.833287  256790 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:13:50.068112  256790 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:13:50.349879  256790 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:13:50.396376  256790 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:13:50.845181  256790 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:13:51.371552  256790 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:13:51.372143  256790 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:13:51.375886  256790 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:13:50.784864  261872 cli_runner.go:164] Run: docker network inspect old-k8s-version-838815 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:13:50.803745  261872 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:13:50.808156  261872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
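
The two commands above make the host.minikube.internal mapping idempotent: drop any existing /etc/hosts line ending in that name, append a fresh "IP<TAB>name" entry to a temp file, and copy the temp file over /etc/hosts (a copy rather than a rename, since /etc/hosts is typically a bind mount inside the container). An illustrative Go version of the same upsert, not minikube source:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost removes any line ending in "<TAB>name" and appends "ip<TAB>name",
    // mirroring the grep -v / echo / cp one-liner in the log. Needs root to write.
    func upsertHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale mapping, same effect as grep -v
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        // The log stages the result in /tmp/h.$$ and then `sudo cp`s it into place;
        // writing directly keeps this sketch short.
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        fmt.Println(upsertHost("/etc/hosts", "192.168.94.1", "host.minikube.internal"))
    }
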
	I1124 03:13:50.818521  261872 kubeadm.go:884] updating cluster {Name:old-k8s-version-838815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-838815 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:13:50.818658  261872 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 03:13:50.818721  261872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:13:50.844347  261872 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:13:50.844369  261872 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:13:50.844426  261872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:13:50.870116  261872 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:13:50.870140  261872 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:13:50.870147  261872 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 containerd true true} ...
	I1124 03:13:50.870271  261872 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-838815 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-838815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
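
The kubelet unit printed above is applied as a systemd drop-in that clears ExecStart and re-sets it with the bootstrap kubeconfig, hostname override and node IP; the `scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf` line further down is presumably the matching file transfer. A simplified, illustrative render of such a drop-in; the text here is a stand-in, not the exact file minikube writes:

    package main

    import "fmt"

    // renderDropIn builds a simplified kubelet drop-in like the one shown above;
    // ExecStart= is emptied first so the override fully replaces the packaged unit's command.
    func renderDropIn(runtime, kubeletPath, nodeName, nodeIP string) string {
        return fmt.Sprintf(
            "[Unit]\nWants=%s.service\n\n[Service]\nExecStart=\nExecStart=%s --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s\n\n[Install]\n",
            runtime, kubeletPath, nodeName, nodeIP)
    }

    func main() {
        fmt.Print(renderDropIn("containerd", "/var/lib/minikube/binaries/v1.28.0/kubelet", "old-k8s-version-838815", "192.168.94.2"))
    }
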
	I1124 03:13:50.870331  261872 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:13:50.895009  261872 cni.go:84] Creating CNI manager for ""
	I1124 03:13:50.895030  261872 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:13:50.895042  261872 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:13:50.895061  261872 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-838815 NodeName:old-k8s-version-838815 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:13:50.895166  261872 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-838815"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:13:50.895220  261872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 03:13:50.903262  261872 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:13:50.903330  261872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:13:50.910939  261872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1124 03:13:50.923552  261872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:13:50.936014  261872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
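
At this point the rendered kubeadm config (the YAML printed above) has been staged on the node as /var/tmp/minikube/kubeadm.yaml.new; the `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` run further down in this log is what lets minikube conclude "The running cluster does not require reconfiguration". A minimal sketch of that comparison, assuming the two paths from the log (illustrative, not minikube source):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // needsReconfig reports whether the freshly rendered kubeadm config differs
    // from the one already deployed on the node.
    func needsReconfig(currentPath, renderedPath string) (bool, error) {
        cur, err := os.ReadFile(currentPath)
        if err != nil {
            if os.IsNotExist(err) {
                return true, nil // nothing deployed yet, so the new config must be applied
            }
            return false, err
        }
        next, err := os.ReadFile(renderedPath)
        if err != nil {
            return false, err
        }
        return !bytes.Equal(cur, next), nil
    }

    func main() {
        changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println("compare failed:", err)
            return
        }
        fmt.Println("reconfiguration needed:", changed)
    }
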
	I1124 03:13:50.948693  261872 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:13:50.952277  261872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:13:50.961934  261872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:51.042374  261872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:13:51.074826  261872 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815 for IP: 192.168.94.2
	I1124 03:13:51.074846  261872 certs.go:195] generating shared ca certs ...
	I1124 03:13:51.074865  261872 certs.go:227] acquiring lock for ca certs: {Name:mkd28e9f2e8e31fe23d0ba27851eb0df56d94420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:51.075047  261872 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key
	I1124 03:13:51.075114  261872 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key
	I1124 03:13:51.075126  261872 certs.go:257] generating profile certs ...
	I1124 03:13:51.075227  261872 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.key
	I1124 03:13:51.075311  261872 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/apiserver.key.1d226222
	I1124 03:13:51.075433  261872 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/proxy-client.key
	I1124 03:13:51.075576  261872 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem (1338 bytes)
	W1124 03:13:51.075619  261872 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429_empty.pem, impossibly tiny 0 bytes
	I1124 03:13:51.075632  261872 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:13:51.075682  261872 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:13:51.075740  261872 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:13:51.075797  261872 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem (1679 bytes)
	I1124 03:13:51.075862  261872 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:13:51.076633  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:13:51.095698  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:13:51.115901  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:13:51.135907  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:13:51.158937  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 03:13:51.181743  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:13:51.201818  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:13:51.221017  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:13:51.238538  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:13:51.256409  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem --> /usr/share/ca-certificates/8429.pem (1338 bytes)
	I1124 03:13:51.274062  261872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /usr/share/ca-certificates/84292.pem (1708 bytes)
	I1124 03:13:51.294048  261872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:13:51.307329  261872 ssh_runner.go:195] Run: openssl version
	I1124 03:13:51.314329  261872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8429.pem && ln -fs /usr/share/ca-certificates/8429.pem /etc/ssl/certs/8429.pem"
	I1124 03:13:51.323326  261872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8429.pem
	I1124 03:13:51.327467  261872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/8429.pem
	I1124 03:13:51.327550  261872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8429.pem
	I1124 03:13:51.364023  261872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8429.pem /etc/ssl/certs/51391683.0"
	I1124 03:13:51.372767  261872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84292.pem && ln -fs /usr/share/ca-certificates/84292.pem /etc/ssl/certs/84292.pem"
	I1124 03:13:51.382195  261872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84292.pem
	I1124 03:13:51.386199  261872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/84292.pem
	I1124 03:13:51.386248  261872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84292.pem
	I1124 03:13:51.431567  261872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84292.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:13:51.444377  261872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:13:51.453471  261872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:51.457950  261872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:51.458004  261872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:51.492858  261872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
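
Each certificate copied above is made visible to OpenSSL-style trust lookups by hashing its subject (`openssl x509 -hash -noout -in <pem>`) and symlinking the PEM into /etc/ssl/certs under `<hash>.0`; that is what the `ln -fs ... /etc/ssl/certs/b5213941.0` style commands do. A standalone sketch of the same two steps (illustrative only; needs root and the openssl binary):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // trustCert computes the OpenSSL subject hash of a PEM file and links it into
    // /etc/ssl/certs as <hash>.0, like the ln -fs commands in the log.
    func trustCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // mirror `ln -fs`: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
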
	I1124 03:13:51.502686  261872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:13:51.506868  261872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:13:51.546231  261872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:13:51.581525  261872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:13:51.625966  261872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:13:51.677213  261872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:13:51.733224  261872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
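
`openssl x509 -checkend 86400` exits non-zero when a certificate will no longer be valid 24 hours from now, which is how the block above validates the restored control-plane certs. The same check can be done without shelling out; this is an illustrative Go version, not what minikube runs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate in path stops being valid
    // within duration d, the condition that -checkend 86400 tests for.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
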
	I1124 03:13:51.785952  261872 kubeadm.go:401] StartCluster: {Name:old-k8s-version-838815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-838815 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:51.786067  261872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:13:51.786180  261872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:13:51.834133  261872 cri.go:89] found id: "fe5729b68274c0b8298033780db8e598f4fe68462447e990067ef8b90912c08e"
	I1124 03:13:51.834152  261872 cri.go:89] found id: "d1d68ceed01d35fb40c6c7d9b864ed747b3c699ffdb4016ec6a78ae1448d9a87"
	I1124 03:13:51.834170  261872 cri.go:89] found id: "ad6fe29a193921e5500399fb1cd74cb294bc8ca63b2ccf3aadb5dc7f28382e15"
	I1124 03:13:51.834174  261872 cri.go:89] found id: "a42f904b13af808a4635594fcbc05f51d10523e1395a305ac77d263dc68e56fe"
	I1124 03:13:51.834176  261872 cri.go:89] found id: "9c967be1346874a3d082ab04f13f5fb619eecacf5fb7ad188245ab5e7fe1fd39"
	I1124 03:13:51.834190  261872 cri.go:89] found id: "d417c8d3e50280e381cd48b9133ff9b7eee5647f3de99e210052408619e7a770"
	I1124 03:13:51.834193  261872 cri.go:89] found id: "da6efdd3aa62d69f1d169afe237a09597925d965af4ae63cb4a3d5c4fdec4a9e"
	I1124 03:13:51.834196  261872 cri.go:89] found id: "5252475449db61ed023b07a2c7783bea6f77e7aad8afe357a282907f58383b49"
	I1124 03:13:51.834198  261872 cri.go:89] found id: "ba673dc701109bf125ff9985c0914f2ba2109e73d86e870cceda5494df539e38"
	I1124 03:13:51.834205  261872 cri.go:89] found id: "6d5b31c71edc46daad185ace0e1d3f5ec67dd2787b6d503af150ed6b776dd725"
	I1124 03:13:51.834207  261872 cri.go:89] found id: "6d6e12d242d5e9f46758e6fc6e8d424eb9bd8d2f091a9c6be9a834d07c08f917"
	I1124 03:13:51.834209  261872 cri.go:89] found id: "f861f902328c35216c5237199b026c1c5955de0259a65cb749000ef69844ea95"
	I1124 03:13:51.834212  261872 cri.go:89] found id: ""
	I1124 03:13:51.834267  261872 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1124 03:13:51.865491  261872 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95","pid":808,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95/rootfs","created":"2025-11-24T03:13:51.660061523Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-838815_927cbff391bb332f43f45f26699862ae","io.kubernetes.cri.sandbox-memory":"0","
io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-838815","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"927cbff391bb332f43f45f26699862ae"},"owner":"root"},{"ociVersion":"1.2.1","id":"a42f904b13af808a4635594fcbc05f51d10523e1395a305ac77d263dc68e56fe","pid":924,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a42f904b13af808a4635594fcbc05f51d10523e1395a305ac77d263dc68e56fe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a42f904b13af808a4635594fcbc05f51d10523e1395a305ac77d263dc68e56fe/rootfs","created":"2025-11-24T03:13:51.777907031Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95","io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-838815","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.c
ri.sandbox-uid":"927cbff391bb332f43f45f26699862ae"},"owner":"root"},{"ociVersion":"1.2.1","id":"aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e","pid":861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e/rootfs","created":"2025-11-24T03:13:51.697714244Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-838815_59d28715e65b26ba92b75a322d154274","io.kubernetes.cri.sa
ndbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-838815","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"59d28715e65b26ba92b75a322d154274"},"owner":"root"},{"ociVersion":"1.2.1","id":"ad6fe29a193921e5500399fb1cd74cb294bc8ca63b2ccf3aadb5dc7f28382e15","pid":931,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad6fe29a193921e5500399fb1cd74cb294bc8ca63b2ccf3aadb5dc7f28382e15","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad6fe29a193921e5500399fb1cd74cb294bc8ca63b2ccf3aadb5dc7f28382e15/rootfs","created":"2025-11-24T03:13:51.792630729Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.0","io.kubernetes.cri.sandbox-id":"fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-838815","io.kuber
netes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b037717515dec83b45dc7eca1e2db0bb"},"owner":"root"},{"ociVersion":"1.2.1","id":"d1d68ceed01d35fb40c6c7d9b864ed747b3c699ffdb4016ec6a78ae1448d9a87","pid":965,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1d68ceed01d35fb40c6c7d9b864ed747b3c699ffdb4016ec6a78ae1448d9a87","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1d68ceed01d35fb40c6c7d9b864ed747b3c699ffdb4016ec6a78ae1448d9a87/rootfs","created":"2025-11-24T03:13:51.810097482Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.0","io.kubernetes.cri.sandbox-id":"aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-838815","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"59d28715e65b26ba92b75a32
2d154274"},"owner":"root"},{"ociVersion":"1.2.1","id":"fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515","pid":823,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515/rootfs","created":"2025-11-24T03:13:51.662600956Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-838815_b037717515dec83b45dc7eca1e2db0bb","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sand
box-name":"kube-apiserver-old-k8s-version-838815","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b037717515dec83b45dc7eca1e2db0bb"},"owner":"root"},{"ociVersion":"1.2.1","id":"fe5729b68274c0b8298033780db8e598f4fe68462447e990067ef8b90912c08e","pid":972,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe5729b68274c0b8298033780db8e598f4fe68462447e990067ef8b90912c08e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe5729b68274c0b8298033780db8e598f4fe68462447e990067ef8b90912c08e/rootfs","created":"2025-11-24T03:13:51.820912641Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.0","io.kubernetes.cri.sandbox-id":"fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-838815","io.kubernetes.cri.sand
box-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9245bbeddbc02ca342af19af610818c6"},"owner":"root"},{"ociVersion":"1.2.1","id":"fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca","pid":863,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca/rootfs","created":"2025-11-24T03:13:51.699900586Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-838815_9
245bbeddbc02ca342af19af610818c6","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-838815","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9245bbeddbc02ca342af19af610818c6"},"owner":"root"}]
	I1124 03:13:51.865693  261872 cri.go:126] list returned 8 containers
	I1124 03:13:51.865718  261872 cri.go:129] container: {ID:02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95 Status:running}
	I1124 03:13:51.865741  261872 cri.go:131] skipping 02413546fc41f5b800fb35290b6e432ceb6f34bcd96bdedb324b2ee849199c95 - not in ps
	I1124 03:13:51.865753  261872 cri.go:129] container: {ID:a42f904b13af808a4635594fcbc05f51d10523e1395a305ac77d263dc68e56fe Status:running}
	I1124 03:13:51.865763  261872 cri.go:135] skipping {a42f904b13af808a4635594fcbc05f51d10523e1395a305ac77d263dc68e56fe running}: state = "running", want "paused"
	I1124 03:13:51.865801  261872 cri.go:129] container: {ID:aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e Status:running}
	I1124 03:13:51.865810  261872 cri.go:131] skipping aa9bf22ca90bb4dee53de833323b3f417656a884d0d129ef1cd95b424152903e - not in ps
	I1124 03:13:51.865815  261872 cri.go:129] container: {ID:ad6fe29a193921e5500399fb1cd74cb294bc8ca63b2ccf3aadb5dc7f28382e15 Status:running}
	I1124 03:13:51.865822  261872 cri.go:135] skipping {ad6fe29a193921e5500399fb1cd74cb294bc8ca63b2ccf3aadb5dc7f28382e15 running}: state = "running", want "paused"
	I1124 03:13:51.865829  261872 cri.go:129] container: {ID:d1d68ceed01d35fb40c6c7d9b864ed747b3c699ffdb4016ec6a78ae1448d9a87 Status:running}
	I1124 03:13:51.865836  261872 cri.go:135] skipping {d1d68ceed01d35fb40c6c7d9b864ed747b3c699ffdb4016ec6a78ae1448d9a87 running}: state = "running", want "paused"
	I1124 03:13:51.865842  261872 cri.go:129] container: {ID:fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515 Status:running}
	I1124 03:13:51.865847  261872 cri.go:131] skipping fdfa740c2c429845fa43b72ae75fa21c361ab14d57941a3e0fc8569b837dc515 - not in ps
	I1124 03:13:51.865854  261872 cri.go:129] container: {ID:fe5729b68274c0b8298033780db8e598f4fe68462447e990067ef8b90912c08e Status:created}
	I1124 03:13:51.865860  261872 cri.go:135] skipping {fe5729b68274c0b8298033780db8e598f4fe68462447e990067ef8b90912c08e created}: state = "created", want "paused"
	I1124 03:13:51.865867  261872 cri.go:129] container: {ID:fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca Status:running}
	I1124 03:13:51.865874  261872 cri.go:131] skipping fff8fa283e5ca297703ce22be470dfc00c7044c838c832f7eaa5ee1651f781ca - not in ps
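
The JSON dump above is the output of `sudo runc --root /run/containerd/runc/k8s.io list -f json`, and the cri.go lines that follow keep or skip each entry by status (only "paused" containers are wanted here) and drop pod sandboxes and IDs not present in the earlier crictl listing. A reduced sketch of that filtering (illustrative; the real code also cross-checks against the crictl IDs):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // runcContainer holds only the fields this example reads from `runc list -f json`.
    type runcContainer struct {
        ID          string            `json:"id"`
        Status      string            `json:"status"`
        Annotations map[string]string `json:"annotations"`
    }

    // pausedContainers returns the IDs of paused, non-sandbox containers.
    func pausedContainers() ([]string, error) {
        out, err := exec.Command("sudo", "runc", "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
        if err != nil {
            return nil, err
        }
        var cs []runcContainer
        if err := json.Unmarshal(out, &cs); err != nil {
            return nil, err
        }
        var ids []string
        for _, c := range cs {
            if c.Annotations["io.kubernetes.cri.container-type"] == "sandbox" {
                continue // pod sandboxes never show up in the crictl listing, hence the "not in ps" skips
            }
            if c.Status != "paused" {
                continue // state = "running" or "created", want "paused"
            }
            ids = append(ids, c.ID)
        }
        return ids, nil
    }

    func main() {
        ids, err := pausedContainers()
        fmt.Println(ids, err)
    }
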
	I1124 03:13:51.865924  261872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:13:51.877803  261872 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:13:51.877825  261872 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:13:51.877872  261872 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:13:51.887574  261872 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:13:51.888333  261872 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-838815" does not appear in /home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:13:51.888824  261872 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-4883/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-838815" cluster setting kubeconfig missing "old-k8s-version-838815" context setting]
	I1124 03:13:51.889590  261872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/kubeconfig: {Name:mkf99f016b653afd282cf36d34d1cc32c34d90de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:51.891510  261872 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:13:51.902318  261872 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 03:13:51.902378  261872 kubeadm.go:602] duration metric: took 24.546025ms to restartPrimaryControlPlane
	I1124 03:13:51.902409  261872 kubeadm.go:403] duration metric: took 116.453218ms to StartCluster
	I1124 03:13:51.902430  261872 settings.go:142] acquiring lock: {Name:mk05d84efd831d60555ea716cd9d2a0a41871249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:51.902506  261872 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:13:51.903625  261872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/kubeconfig: {Name:mkf99f016b653afd282cf36d34d1cc32c34d90de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:51.903885  261872 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:13:51.904110  261872 config.go:182] Loaded profile config "old-k8s-version-838815": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 03:13:51.904121  261872 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:13:51.904217  261872 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-838815"
	I1124 03:13:51.904234  261872 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-838815"
	W1124 03:13:51.904243  261872 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:13:51.904243  261872 addons.go:70] Setting dashboard=true in profile "old-k8s-version-838815"
	I1124 03:13:51.904255  261872 addons.go:239] Setting addon dashboard=true in "old-k8s-version-838815"
	W1124 03:13:51.904269  261872 addons.go:248] addon dashboard should already be in state true
	I1124 03:13:51.904292  261872 host.go:66] Checking if "old-k8s-version-838815" exists ...
	I1124 03:13:51.904328  261872 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-838815"
	I1124 03:13:51.904342  261872 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-838815"
	W1124 03:13:51.904349  261872 addons.go:248] addon metrics-server should already be in state true
	I1124 03:13:51.904371  261872 host.go:66] Checking if "old-k8s-version-838815" exists ...
	I1124 03:13:51.904229  261872 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-838815"
	I1124 03:13:51.904445  261872 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-838815"
	I1124 03:13:51.904730  261872 host.go:66] Checking if "old-k8s-version-838815" exists ...
	I1124 03:13:51.904791  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:51.904866  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:51.905157  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:51.905172  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:51.907972  261872 out.go:179] * Verifying Kubernetes components...
	I1124 03:13:51.911888  261872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:51.934792  261872 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-838815"
	W1124 03:13:51.934822  261872 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:13:51.934852  261872 host.go:66] Checking if "old-k8s-version-838815" exists ...
	I1124 03:13:51.935311  261872 cli_runner.go:164] Run: docker container inspect old-k8s-version-838815 --format={{.State.Status}}
	I1124 03:13:51.940160  261872 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:13:51.940318  261872 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:13:51.941580  261872 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:13:51.941604  261872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:13:51.941654  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:51.941577  261872 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 03:13:51.942826  261872 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 03:13:51.942845  261872 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 03:13:51.942915  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:51.943004  261872 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:13:51.377450  256790 out.go:252]   - Booting up control plane ...
	I1124 03:13:51.377575  256790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:13:51.377695  256790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:13:51.378229  256790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:13:51.394496  256790 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:13:51.394675  256790 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:13:51.402191  256790 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:13:51.402335  256790 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:13:51.402405  256790 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:13:51.509919  256790 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:13:51.510064  256790 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:13:52.511837  256790 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001905438s
	I1124 03:13:52.515714  256790 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:13:52.515880  256790 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 03:13:52.516030  256790 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:13:52.516146  256790 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:13:51.944113  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:13:51.944128  261872 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:13:51.944190  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:51.965055  261872 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:13:51.965084  261872 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:13:51.965149  261872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-838815
	I1124 03:13:51.983416  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:51.986881  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:51.987120  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:52.009353  261872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/old-k8s-version-838815/id_rsa Username:docker}
	I1124 03:13:52.110916  261872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:13:52.115060  261872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:13:52.130676  261872 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-838815" to be "Ready" ...
	I1124 03:13:52.135712  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:13:52.135735  261872 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:13:52.137139  261872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:13:52.139822  261872 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 03:13:52.139839  261872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1124 03:13:52.158316  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:13:52.158389  261872 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:13:52.168093  261872 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 03:13:52.168792  261872 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 03:13:52.179507  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:13:52.179531  261872 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:13:52.195853  261872 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 03:13:52.195876  261872 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 03:13:52.229574  261872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 03:13:52.241492  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:13:52.241517  261872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:13:52.268734  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:13:52.268757  261872 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:13:52.284063  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:13:52.284089  261872 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:13:52.300059  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:13:52.300087  261872 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:13:52.319441  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:13:52.319463  261872 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:13:52.333000  261872 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:13:52.333024  261872 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:13:52.347840  261872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:13:54.215012  261872 node_ready.go:49] node "old-k8s-version-838815" is "Ready"
	I1124 03:13:54.215045  261872 node_ready.go:38] duration metric: took 2.084340625s for node "old-k8s-version-838815" to be "Ready" ...
	I1124 03:13:54.215061  261872 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:13:54.215114  261872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:13:55.124393  261872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.009301652s)
	I1124 03:13:55.124478  261872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.987316377s)
	I1124 03:13:55.124539  261872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.894935129s)
	I1124 03:13:55.124565  261872 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-838815"
	I1124 03:13:55.609379  261872 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.394243392s)
	I1124 03:13:55.609422  261872 api_server.go:72] duration metric: took 3.70545074s to wait for apiserver process to appear ...
	I1124 03:13:55.609430  261872 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:13:55.609451  261872 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:13:55.609959  261872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.262059343s)
	I1124 03:13:55.611233  261872 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-838815 addons enable metrics-server
	
	I1124 03:13:55.612916  261872 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
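
With the addons applied, the harness goes back to waiting on the apiserver healthz endpoint (the `Checking apiserver healthz at https://192.168.94.2:8443/healthz` line above, and the refused check that follows for the other cluster). A hedged sketch of such a poll loop; unlike minikube, it skips TLS verification because it does not load the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.94.2:8443/healthz", 30*time.Second))
    }
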
	I1124 03:13:52.729859  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:52.730282  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:52.730330  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:52.730379  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:52.770262  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:52.770287  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:52.770295  222154 cri.go:89] found id: ""
	I1124 03:13:52.770304  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:52.770423  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.776345  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.782454  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:52.782558  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:52.822065  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:52.822089  222154 cri.go:89] found id: ""
	I1124 03:13:52.822108  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:52.822162  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.827509  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:52.827582  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:52.864305  222154 cri.go:89] found id: ""
	I1124 03:13:52.864331  222154 logs.go:282] 0 containers: []
	W1124 03:13:52.864341  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:52.864356  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:52.864413  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:52.899888  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:52.899918  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:52.899926  222154 cri.go:89] found id: ""
	I1124 03:13:52.899936  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:52.900000  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.905926  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.911101  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:52.911273  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:52.945999  222154 cri.go:89] found id: ""
	I1124 03:13:52.946026  222154 logs.go:282] 0 containers: []
	W1124 03:13:52.946036  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:52.946044  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:52.946101  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:52.980910  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:52.980935  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:52.980941  222154 cri.go:89] found id: ""
	I1124 03:13:52.980950  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:52.981009  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.986708  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:52.991216  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:52.991291  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:53.029761  222154 cri.go:89] found id: ""
	I1124 03:13:53.029808  222154 logs.go:282] 0 containers: []
	W1124 03:13:53.029822  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:53.029830  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:53.029888  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:53.063730  222154 cri.go:89] found id: ""
	I1124 03:13:53.063753  222154 logs.go:282] 0 containers: []
	W1124 03:13:53.063761  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:53.063770  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:13:53.063794  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:53.097201  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:13:53.097230  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:53.143200  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:13:53.143227  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:13:53.209559  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:53.209596  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:53.262124  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:13:53.262154  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:53.305178  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:13:53.305212  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:53.371116  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:13:53.371155  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:13:53.410310  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:53.410338  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:53.550704  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:53.550741  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:53.567739  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:53.567801  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:53.645518  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:53.645546  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:13:53.645561  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:53.693920  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:13:53.693953  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:55.147244  256790 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.631504123s
	I1124 03:13:55.673486  256790 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.157794998s
	I1124 03:13:57.517908  256790 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002124183s
	I1124 03:13:57.529050  256790 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:13:57.539437  256790 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:13:57.547894  256790 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:13:57.548209  256790 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-182765 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:13:57.554385  256790 kubeadm.go:319] [bootstrap-token] Using token: 4gg6pq.7a7gneeh21qubvs3
	I1124 03:13:57.555734  256790 out.go:252]   - Configuring RBAC rules ...
	I1124 03:13:57.555980  256790 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:13:57.560138  256790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:13:57.565840  256790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:13:57.569730  256790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:13:57.572050  256790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:13:57.574156  256790 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:13:57.923390  256790 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:13:58.339738  256790 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:13:58.923418  256790 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:13:58.926055  256790 kubeadm.go:319] 
	I1124 03:13:58.926165  256790 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:13:58.926175  256790 kubeadm.go:319] 
	I1124 03:13:58.926306  256790 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:13:58.926319  256790 kubeadm.go:319] 
	I1124 03:13:58.926365  256790 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:13:58.926430  256790 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:13:58.926494  256790 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:13:58.926502  256790 kubeadm.go:319] 
	I1124 03:13:58.926557  256790 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:13:58.926567  256790 kubeadm.go:319] 
	I1124 03:13:58.926627  256790 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:13:58.926659  256790 kubeadm.go:319] 
	I1124 03:13:58.926730  256790 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:13:58.926856  256790 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:13:58.926919  256790 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:13:58.926939  256790 kubeadm.go:319] 
	I1124 03:13:58.927057  256790 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:13:58.927163  256790 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:13:58.927172  256790 kubeadm.go:319] 
	I1124 03:13:58.927278  256790 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4gg6pq.7a7gneeh21qubvs3 \
	I1124 03:13:58.927410  256790 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5e943442c508de754e907135e9f68708045a0a18fa82619a148153bf802a361b \
	I1124 03:13:58.927443  256790 kubeadm.go:319] 	--control-plane 
	I1124 03:13:58.927452  256790 kubeadm.go:319] 
	I1124 03:13:58.927565  256790 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:13:58.927575  256790 kubeadm.go:319] 
	I1124 03:13:58.927689  256790 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4gg6pq.7a7gneeh21qubvs3 \
	I1124 03:13:58.927869  256790 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5e943442c508de754e907135e9f68708045a0a18fa82619a148153bf802a361b 
	I1124 03:13:58.930004  256790 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:13:58.930109  256790 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:13:58.930138  256790 cni.go:84] Creating CNI manager for ""
	I1124 03:13:58.930148  256790 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:13:58.932398  256790 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:13:55.614396  261872 addons.go:530] duration metric: took 3.710278081s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1124 03:13:55.615674  261872 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:13:55.617606  261872 api_server.go:141] control plane version: v1.28.0
	I1124 03:13:55.617634  261872 api_server.go:131] duration metric: took 8.19655ms to wait for apiserver health ...
	I1124 03:13:55.617645  261872 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:13:55.623812  261872 system_pods.go:59] 9 kube-system pods found
	I1124 03:13:55.623863  261872 system_pods.go:61] "coredns-5dd5756b68-gfsqm" [afa1f94c-8c55-4847-9152-189f27ff812a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:13:55.623878  261872 system_pods.go:61] "etcd-old-k8s-version-838815" [6bbc2335-d9af-448e-87e7-2179d5b28065] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:13:55.623898  261872 system_pods.go:61] "kindnet-rvm46" [f375e199-56a3-44e4-97fb-08f38dc56b33] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:13:55.623914  261872 system_pods.go:61] "kube-apiserver-old-k8s-version-838815" [392c3bef-1022-4055-96e3-cb0a96f804a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:13:55.623933  261872 system_pods.go:61] "kube-controller-manager-old-k8s-version-838815" [73e96a09-3a84-4bb8-8e3c-4c9804d0e497] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:13:55.623948  261872 system_pods.go:61] "kube-proxy-cz68g" [d975541d-c6d9-4d84-8dc6-4ee5db7a575f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:13:55.623956  261872 system_pods.go:61] "kube-scheduler-old-k8s-version-838815" [065763c2-fe08-4d07-9851-171461f47d49] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:13:55.623965  261872 system_pods.go:61] "metrics-server-57f55c9bc5-4qm94" [bca03fa8-7c45-489c-b2fc-5834243ab91c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:13:55.623975  261872 system_pods.go:61] "storage-provisioner" [1dc12010-009c-4a23-af68-7bbba3679259] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:13:55.623990  261872 system_pods.go:74] duration metric: took 6.331708ms to wait for pod list to return data ...
	I1124 03:13:55.624007  261872 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:13:55.627151  261872 default_sa.go:45] found service account: "default"
	I1124 03:13:55.627176  261872 default_sa.go:55] duration metric: took 3.16223ms for default service account to be created ...
	I1124 03:13:55.627186  261872 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:13:55.638577  261872 system_pods.go:86] 9 kube-system pods found
	I1124 03:13:55.638621  261872 system_pods.go:89] "coredns-5dd5756b68-gfsqm" [afa1f94c-8c55-4847-9152-189f27ff812a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:13:55.638636  261872 system_pods.go:89] "etcd-old-k8s-version-838815" [6bbc2335-d9af-448e-87e7-2179d5b28065] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:13:55.638649  261872 system_pods.go:89] "kindnet-rvm46" [f375e199-56a3-44e4-97fb-08f38dc56b33] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:13:55.638666  261872 system_pods.go:89] "kube-apiserver-old-k8s-version-838815" [392c3bef-1022-4055-96e3-cb0a96f804a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:13:55.638676  261872 system_pods.go:89] "kube-controller-manager-old-k8s-version-838815" [73e96a09-3a84-4bb8-8e3c-4c9804d0e497] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:13:55.638692  261872 system_pods.go:89] "kube-proxy-cz68g" [d975541d-c6d9-4d84-8dc6-4ee5db7a575f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:13:55.638701  261872 system_pods.go:89] "kube-scheduler-old-k8s-version-838815" [065763c2-fe08-4d07-9851-171461f47d49] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:13:55.638715  261872 system_pods.go:89] "metrics-server-57f55c9bc5-4qm94" [bca03fa8-7c45-489c-b2fc-5834243ab91c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:13:55.638722  261872 system_pods.go:89] "storage-provisioner" [1dc12010-009c-4a23-af68-7bbba3679259] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:13:55.638738  261872 system_pods.go:126] duration metric: took 11.545197ms to wait for k8s-apps to be running ...
	I1124 03:13:55.638749  261872 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:13:55.638817  261872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:55.663974  261872 system_svc.go:56] duration metric: took 25.216876ms WaitForService to wait for kubelet
	I1124 03:13:55.664014  261872 kubeadm.go:587] duration metric: took 3.760044799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:13:55.664038  261872 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:13:55.669975  261872 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:13:55.670020  261872 node_conditions.go:123] node cpu capacity is 8
	I1124 03:13:55.670042  261872 node_conditions.go:105] duration metric: took 5.99814ms to run NodePressure ...
	I1124 03:13:55.670059  261872 start.go:242] waiting for startup goroutines ...
	I1124 03:13:55.670068  261872 start.go:247] waiting for cluster config update ...
	I1124 03:13:55.670083  261872 start.go:256] writing updated cluster config ...
	I1124 03:13:55.670575  261872 ssh_runner.go:195] Run: rm -f paused
	I1124 03:13:55.676895  261872 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:13:55.682948  261872 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gfsqm" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:13:57.689262  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:13:59.689521  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:13:56.236082  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:56.236478  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:56.236527  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:56.236569  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:56.267461  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:56.267479  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:56.267483  222154 cri.go:89] found id: ""
	I1124 03:13:56.267490  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:56.267539  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.272263  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.279717  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:56.279814  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:56.316735  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:56.316763  222154 cri.go:89] found id: ""
	I1124 03:13:56.316772  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:56.316841  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.322328  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:56.322412  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:56.357228  222154 cri.go:89] found id: ""
	I1124 03:13:56.357257  222154 logs.go:282] 0 containers: []
	W1124 03:13:56.357269  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:56.357276  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:56.357332  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:56.383314  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:56.383337  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:56.383342  222154 cri.go:89] found id: ""
	I1124 03:13:56.383350  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:56.383405  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.387531  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.391426  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:56.391491  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:56.418050  222154 cri.go:89] found id: ""
	I1124 03:13:56.418074  222154 logs.go:282] 0 containers: []
	W1124 03:13:56.418084  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:56.418090  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:56.418139  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:56.444046  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:56.444065  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:56.444070  222154 cri.go:89] found id: ""
	I1124 03:13:56.444080  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:56.444136  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.448167  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:56.451808  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:56.451857  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:56.476763  222154 cri.go:89] found id: ""
	I1124 03:13:56.476795  222154 logs.go:282] 0 containers: []
	W1124 03:13:56.476805  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:56.476813  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:56.476862  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:56.502409  222154 cri.go:89] found id: ""
	I1124 03:13:56.502435  222154 logs.go:282] 0 containers: []
	W1124 03:13:56.502444  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:56.502455  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:56.502476  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:56.558000  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:56.558026  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:13:56.558043  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:56.590347  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:13:56.590377  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:56.629340  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:13:56.629377  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:56.692398  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:13:56.692436  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:56.725794  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:13:56.725822  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:56.767008  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:56.767040  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:56.806637  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:13:56.806666  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:56.846682  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:13:56.846709  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:13:56.899795  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:13:56.899831  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:13:56.934323  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:56.934353  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:57.054732  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:57.054764  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:59.572502  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:13:59.573017  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:13:59.573064  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:13:59.573114  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:13:59.601228  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:13:59.601247  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:13:59.601251  222154 cri.go:89] found id: ""
	I1124 03:13:59.601260  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:13:59.601320  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.605366  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.609257  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:13:59.609318  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:13:59.635336  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:13:59.635363  222154 cri.go:89] found id: ""
	I1124 03:13:59.635376  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:13:59.635505  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.640364  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:13:59.640430  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:13:59.667097  222154 cri.go:89] found id: ""
	I1124 03:13:59.667122  222154 logs.go:282] 0 containers: []
	W1124 03:13:59.667129  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:13:59.667136  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:13:59.667190  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:13:59.695992  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:13:59.696015  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:13:59.696020  222154 cri.go:89] found id: ""
	I1124 03:13:59.696028  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:13:59.696080  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.700222  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.703970  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:13:59.704022  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:13:59.728834  222154 cri.go:89] found id: ""
	I1124 03:13:59.728861  222154 logs.go:282] 0 containers: []
	W1124 03:13:59.728870  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:13:59.728877  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:13:59.728933  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:13:59.757314  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:59.757339  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:13:59.757345  222154 cri.go:89] found id: ""
	I1124 03:13:59.757354  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:13:59.757403  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.761682  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:13:59.766233  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:13:59.766297  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:13:59.798732  222154 cri.go:89] found id: ""
	I1124 03:13:59.798756  222154 logs.go:282] 0 containers: []
	W1124 03:13:59.798766  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:13:59.798783  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:13:59.798843  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:13:59.828107  222154 cri.go:89] found id: ""
	I1124 03:13:59.828128  222154 logs.go:282] 0 containers: []
	W1124 03:13:59.828135  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:13:59.828144  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:13:59.828155  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:13:59.921372  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:13:59.921404  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:13:59.935541  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:13:59.935570  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:13:59.996288  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:13:59.996308  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:13:59.996320  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:00.030411  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:14:00.030443  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:00.083730  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:14:00.083767  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:00.117527  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:14:00.117557  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:14:00.162202  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:14:00.162231  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:14:00.195840  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:14:00.195865  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:00.226785  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:14:00.226815  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:00.261107  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:14:00.261133  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:00.300154  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:14:00.300182  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:13:58.933554  256790 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:13:58.938576  256790 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:13:58.938594  256790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:13:58.952039  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:13:59.166247  256790 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:13:59.166337  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-182765 minikube.k8s.io/updated_at=2025_11_24T03_13_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=no-preload-182765 minikube.k8s.io/primary=true
	I1124 03:13:59.166342  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:13:59.176885  256790 ops.go:34] apiserver oom_adj: -16
	I1124 03:13:59.246724  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:13:59.747124  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:00.247534  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:00.746933  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:01.246841  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:01.747137  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:02.246868  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:02.747050  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:03.246962  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:03.747672  256790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:03.814528  256790 kubeadm.go:1114] duration metric: took 4.648257718s to wait for elevateKubeSystemPrivileges
	I1124 03:14:03.814569  256790 kubeadm.go:403] duration metric: took 16.563608532s to StartCluster
	I1124 03:14:03.814590  256790 settings.go:142] acquiring lock: {Name:mk05d84efd831d60555ea716cd9d2a0a41871249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:03.814662  256790 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:14:03.817002  256790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/kubeconfig: {Name:mkf99f016b653afd282cf36d34d1cc32c34d90de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:03.817278  256790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:14:03.817293  256790 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:14:03.817402  256790 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:14:03.817506  256790 addons.go:70] Setting storage-provisioner=true in profile "no-preload-182765"
	I1124 03:14:03.817515  256790 addons.go:70] Setting default-storageclass=true in profile "no-preload-182765"
	I1124 03:14:03.817526  256790 addons.go:239] Setting addon storage-provisioner=true in "no-preload-182765"
	I1124 03:14:03.817542  256790 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-182765"
	I1124 03:14:03.817552  256790 config.go:182] Loaded profile config "no-preload-182765": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:14:03.817557  256790 host.go:66] Checking if "no-preload-182765" exists ...
	I1124 03:14:03.817978  256790 cli_runner.go:164] Run: docker container inspect no-preload-182765 --format={{.State.Status}}
	I1124 03:14:03.818122  256790 cli_runner.go:164] Run: docker container inspect no-preload-182765 --format={{.State.Status}}
	I1124 03:14:03.819508  256790 out.go:179] * Verifying Kubernetes components...
	I1124 03:14:03.820743  256790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:14:03.848100  256790 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:14:03.849349  256790 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:14:03.849368  256790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:14:03.849424  256790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-182765
	I1124 03:14:03.850444  256790 addons.go:239] Setting addon default-storageclass=true in "no-preload-182765"
	I1124 03:14:03.850489  256790 host.go:66] Checking if "no-preload-182765" exists ...
	I1124 03:14:03.850984  256790 cli_runner.go:164] Run: docker container inspect no-preload-182765 --format={{.State.Status}}
	I1124 03:14:03.882640  256790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/no-preload-182765/id_rsa Username:docker}
	I1124 03:14:03.888690  256790 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:14:03.888714  256790 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:14:03.888824  256790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-182765
	I1124 03:14:03.911485  256790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/no-preload-182765/id_rsa Username:docker}
	I1124 03:14:03.927355  256790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:14:03.975885  256790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:14:04.003884  256790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:14:04.024866  256790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:14:04.118789  256790 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 03:14:04.119847  256790 node_ready.go:35] waiting up to 6m0s for node "no-preload-182765" to be "Ready" ...
	I1124 03:14:04.330452  256790 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1124 03:14:02.188985  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:04.189085  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:14:02.831564  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:14:02.831997  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:14:02.832054  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:14:02.832102  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:14:02.858999  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:02.859020  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:02.859027  222154 cri.go:89] found id: ""
	I1124 03:14:02.859034  222154 logs.go:282] 2 containers: [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:14:02.859095  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:02.863144  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:02.866827  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:14:02.866895  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:14:02.894574  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:02.894592  222154 cri.go:89] found id: ""
	I1124 03:14:02.894599  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:14:02.894643  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:02.898881  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:14:02.898946  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:14:02.925658  222154 cri.go:89] found id: ""
	I1124 03:14:02.925683  222154 logs.go:282] 0 containers: []
	W1124 03:14:02.925693  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:14:02.925700  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:14:02.925761  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:14:02.952756  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:02.952807  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:02.952814  222154 cri.go:89] found id: ""
	I1124 03:14:02.952824  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:14:02.952872  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:02.956856  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:02.960582  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:14:02.960636  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:14:02.988059  222154 cri.go:89] found id: ""
	I1124 03:14:02.988082  222154 logs.go:282] 0 containers: []
	W1124 03:14:02.988089  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:14:02.988094  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:14:02.988143  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:14:03.016143  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:03.016181  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:03.016186  222154 cri.go:89] found id: ""
	I1124 03:14:03.016196  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:14:03.016247  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:03.020163  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:03.024013  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:14:03.024082  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:14:03.052755  222154 cri.go:89] found id: ""
	I1124 03:14:03.052790  222154 logs.go:282] 0 containers: []
	W1124 03:14:03.052801  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:14:03.052809  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:14:03.052868  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:14:03.078674  222154 cri.go:89] found id: ""
	I1124 03:14:03.078694  222154 logs.go:282] 0 containers: []
	W1124 03:14:03.078700  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:14:03.078713  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:14:03.078724  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:03.132465  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:14:03.132494  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:14:03.177122  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:14:03.177154  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:14:03.211154  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:14:03.211178  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:14:03.226137  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:14:03.226166  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:14:03.290769  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:14:03.290809  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:14:03.290825  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:03.330663  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:14:03.330693  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:03.367605  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:14:03.367634  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:03.395989  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:14:03.396020  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:03.431222  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:14:03.431267  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:14:03.537842  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:14:03.537878  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:03.572333  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:14:03.572364  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:06.112455  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:14:04.331391  256790 addons.go:530] duration metric: took 513.991849ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:14:04.622945  256790 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-182765" context rescaled to 1 replicas
	W1124 03:14:06.122987  256790 node_ready.go:57] node "no-preload-182765" has "Ready":"False" status (will retry)
	W1124 03:14:06.693286  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:09.189222  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:14:11.117117  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 03:14:11.117189  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:14:11.117261  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:14:11.150063  222154 cri.go:89] found id: "e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f"
	I1124 03:14:11.150086  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:11.150092  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:11.150096  222154 cri.go:89] found id: ""
	I1124 03:14:11.150105  222154 logs.go:282] 3 containers: [e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:14:11.150166  222154 ssh_runner.go:195] Run: which crictl
	W1124 03:14:08.123105  256790 node_ready.go:57] node "no-preload-182765" has "Ready":"False" status (will retry)
	W1124 03:14:10.623117  256790 node_ready.go:57] node "no-preload-182765" has "Ready":"False" status (will retry)
	W1124 03:14:12.623279  256790 node_ready.go:57] node "no-preload-182765" has "Ready":"False" status (will retry)
	W1124 03:14:11.189944  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:13.688591  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:14:11.155062  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.159119  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.163515  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:14:11.163583  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:14:11.196356  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:11.196398  222154 cri.go:89] found id: ""
	I1124 03:14:11.196409  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:14:11.196465  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.201060  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:14:11.201126  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:14:11.232445  222154 cri.go:89] found id: ""
	I1124 03:14:11.232472  222154 logs.go:282] 0 containers: []
	W1124 03:14:11.232482  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:14:11.232490  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:14:11.232556  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:14:11.263992  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:11.264013  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:11.264017  222154 cri.go:89] found id: ""
	I1124 03:14:11.264024  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:14:11.264081  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.268463  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.272372  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:14:11.272421  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:14:11.302039  222154 cri.go:89] found id: ""
	I1124 03:14:11.302062  222154 logs.go:282] 0 containers: []
	W1124 03:14:11.302069  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:14:11.302077  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:14:11.302123  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:14:11.335864  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:11.335888  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:11.335893  222154 cri.go:89] found id: ""
	I1124 03:14:11.335901  222154 logs.go:282] 2 containers: [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:14:11.335956  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.340998  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:11.346060  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:14:11.346128  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:14:11.383326  222154 cri.go:89] found id: ""
	I1124 03:14:11.383357  222154 logs.go:282] 0 containers: []
	W1124 03:14:11.383369  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:14:11.383378  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:14:11.383439  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:14:11.413051  222154 cri.go:89] found id: ""
	I1124 03:14:11.413076  222154 logs.go:282] 0 containers: []
	W1124 03:14:11.413084  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:14:11.413093  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:14:11.413103  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:14:11.427750  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:14:11.427852  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:11.464164  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:14:11.464196  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:11.498481  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:14:11.498508  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:11.527378  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:14:11.527403  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:11.566711  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:14:11.566740  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:14:15.123100  256790 node_ready.go:57] node "no-preload-182765" has "Ready":"False" status (will retry)
	W1124 03:14:17.622462  256790 node_ready.go:57] node "no-preload-182765" has "Ready":"False" status (will retry)
	W1124 03:14:15.688863  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:18.188648  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:14:18.122398  256790 node_ready.go:49] node "no-preload-182765" is "Ready"
	I1124 03:14:18.122427  256790 node_ready.go:38] duration metric: took 14.002519282s for node "no-preload-182765" to be "Ready" ...
	I1124 03:14:18.122445  256790 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:14:18.122498  256790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:14:18.135618  256790 api_server.go:72] duration metric: took 14.318291491s to wait for apiserver process to appear ...
	I1124 03:14:18.135648  256790 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:14:18.135693  256790 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:14:18.140684  256790 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:14:18.141588  256790 api_server.go:141] control plane version: v1.34.1
	I1124 03:14:18.141609  256790 api_server.go:131] duration metric: took 5.953987ms to wait for apiserver health ...
	I1124 03:14:18.141618  256790 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:14:18.147887  256790 system_pods.go:59] 8 kube-system pods found
	I1124 03:14:18.147923  256790 system_pods.go:61] "coredns-66bc5c9577-lcrl8" [3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:14:18.147930  256790 system_pods.go:61] "etcd-no-preload-182765" [b38360ae-6e1c-4f7d-8529-5f3dfe9431d1] Running
	I1124 03:14:18.147937  256790 system_pods.go:61] "kindnet-ncvw4" [6d2a43f2-69e3-4768-8e15-39fbe53d92f9] Running
	I1124 03:14:18.147949  256790 system_pods.go:61] "kube-apiserver-no-preload-182765" [a9443b37-da68-4a37-bd93-497df769c9af] Running
	I1124 03:14:18.147955  256790 system_pods.go:61] "kube-controller-manager-no-preload-182765" [7735413f-2120-4660-be51-b157a8e1e9fa] Running
	I1124 03:14:18.147959  256790 system_pods.go:61] "kube-proxy-fx42v" [4c8c52d6-d4fd-4be2-8246-f96d95997a62] Running
	I1124 03:14:18.147963  256790 system_pods.go:61] "kube-scheduler-no-preload-182765" [202684ee-474e-4f60-afa0-b5ddabf71edc] Running
	I1124 03:14:18.147969  256790 system_pods.go:61] "storage-provisioner" [271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:14:18.147976  256790 system_pods.go:74] duration metric: took 6.352644ms to wait for pod list to return data ...
	I1124 03:14:18.147984  256790 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:14:18.150951  256790 default_sa.go:45] found service account: "default"
	I1124 03:14:18.151027  256790 default_sa.go:55] duration metric: took 3.035625ms for default service account to be created ...
	I1124 03:14:18.151038  256790 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:14:18.156382  256790 system_pods.go:86] 8 kube-system pods found
	I1124 03:14:18.156421  256790 system_pods.go:89] "coredns-66bc5c9577-lcrl8" [3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:14:18.156429  256790 system_pods.go:89] "etcd-no-preload-182765" [b38360ae-6e1c-4f7d-8529-5f3dfe9431d1] Running
	I1124 03:14:18.156449  256790 system_pods.go:89] "kindnet-ncvw4" [6d2a43f2-69e3-4768-8e15-39fbe53d92f9] Running
	I1124 03:14:18.156456  256790 system_pods.go:89] "kube-apiserver-no-preload-182765" [a9443b37-da68-4a37-bd93-497df769c9af] Running
	I1124 03:14:18.156468  256790 system_pods.go:89] "kube-controller-manager-no-preload-182765" [7735413f-2120-4660-be51-b157a8e1e9fa] Running
	I1124 03:14:18.156474  256790 system_pods.go:89] "kube-proxy-fx42v" [4c8c52d6-d4fd-4be2-8246-f96d95997a62] Running
	I1124 03:14:18.156480  256790 system_pods.go:89] "kube-scheduler-no-preload-182765" [202684ee-474e-4f60-afa0-b5ddabf71edc] Running
	I1124 03:14:18.156487  256790 system_pods.go:89] "storage-provisioner" [271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:14:18.156510  256790 retry.go:31] will retry after 296.995151ms: missing components: kube-dns
	I1124 03:14:18.457756  256790 system_pods.go:86] 8 kube-system pods found
	I1124 03:14:18.457829  256790 system_pods.go:89] "coredns-66bc5c9577-lcrl8" [3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:14:18.457842  256790 system_pods.go:89] "etcd-no-preload-182765" [b38360ae-6e1c-4f7d-8529-5f3dfe9431d1] Running
	I1124 03:14:18.457851  256790 system_pods.go:89] "kindnet-ncvw4" [6d2a43f2-69e3-4768-8e15-39fbe53d92f9] Running
	I1124 03:14:18.457857  256790 system_pods.go:89] "kube-apiserver-no-preload-182765" [a9443b37-da68-4a37-bd93-497df769c9af] Running
	I1124 03:14:18.457862  256790 system_pods.go:89] "kube-controller-manager-no-preload-182765" [7735413f-2120-4660-be51-b157a8e1e9fa] Running
	I1124 03:14:18.457866  256790 system_pods.go:89] "kube-proxy-fx42v" [4c8c52d6-d4fd-4be2-8246-f96d95997a62] Running
	I1124 03:14:18.457870  256790 system_pods.go:89] "kube-scheduler-no-preload-182765" [202684ee-474e-4f60-afa0-b5ddabf71edc] Running
	I1124 03:14:18.457878  256790 system_pods.go:89] "storage-provisioner" [271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:14:18.457892  256790 retry.go:31] will retry after 311.207422ms: missing components: kube-dns
	I1124 03:14:18.772742  256790 system_pods.go:86] 8 kube-system pods found
	I1124 03:14:18.772795  256790 system_pods.go:89] "coredns-66bc5c9577-lcrl8" [3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:14:18.772801  256790 system_pods.go:89] "etcd-no-preload-182765" [b38360ae-6e1c-4f7d-8529-5f3dfe9431d1] Running
	I1124 03:14:18.772807  256790 system_pods.go:89] "kindnet-ncvw4" [6d2a43f2-69e3-4768-8e15-39fbe53d92f9] Running
	I1124 03:14:18.772810  256790 system_pods.go:89] "kube-apiserver-no-preload-182765" [a9443b37-da68-4a37-bd93-497df769c9af] Running
	I1124 03:14:18.772815  256790 system_pods.go:89] "kube-controller-manager-no-preload-182765" [7735413f-2120-4660-be51-b157a8e1e9fa] Running
	I1124 03:14:18.772818  256790 system_pods.go:89] "kube-proxy-fx42v" [4c8c52d6-d4fd-4be2-8246-f96d95997a62] Running
	I1124 03:14:18.772821  256790 system_pods.go:89] "kube-scheduler-no-preload-182765" [202684ee-474e-4f60-afa0-b5ddabf71edc] Running
	I1124 03:14:18.772827  256790 system_pods.go:89] "storage-provisioner" [271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:14:18.772844  256790 retry.go:31] will retry after 451.19412ms: missing components: kube-dns
	I1124 03:14:19.227762  256790 system_pods.go:86] 8 kube-system pods found
	I1124 03:14:19.227802  256790 system_pods.go:89] "coredns-66bc5c9577-lcrl8" [3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:14:19.227808  256790 system_pods.go:89] "etcd-no-preload-182765" [b38360ae-6e1c-4f7d-8529-5f3dfe9431d1] Running
	I1124 03:14:19.227815  256790 system_pods.go:89] "kindnet-ncvw4" [6d2a43f2-69e3-4768-8e15-39fbe53d92f9] Running
	I1124 03:14:19.227819  256790 system_pods.go:89] "kube-apiserver-no-preload-182765" [a9443b37-da68-4a37-bd93-497df769c9af] Running
	I1124 03:14:19.227823  256790 system_pods.go:89] "kube-controller-manager-no-preload-182765" [7735413f-2120-4660-be51-b157a8e1e9fa] Running
	I1124 03:14:19.227826  256790 system_pods.go:89] "kube-proxy-fx42v" [4c8c52d6-d4fd-4be2-8246-f96d95997a62] Running
	I1124 03:14:19.227829  256790 system_pods.go:89] "kube-scheduler-no-preload-182765" [202684ee-474e-4f60-afa0-b5ddabf71edc] Running
	I1124 03:14:19.227834  256790 system_pods.go:89] "storage-provisioner" [271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:14:19.227850  256790 retry.go:31] will retry after 607.556874ms: missing components: kube-dns
	I1124 03:14:19.839632  256790 system_pods.go:86] 8 kube-system pods found
	I1124 03:14:19.839665  256790 system_pods.go:89] "coredns-66bc5c9577-lcrl8" [3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7] Running
	I1124 03:14:19.839672  256790 system_pods.go:89] "etcd-no-preload-182765" [b38360ae-6e1c-4f7d-8529-5f3dfe9431d1] Running
	I1124 03:14:19.839676  256790 system_pods.go:89] "kindnet-ncvw4" [6d2a43f2-69e3-4768-8e15-39fbe53d92f9] Running
	I1124 03:14:19.839680  256790 system_pods.go:89] "kube-apiserver-no-preload-182765" [a9443b37-da68-4a37-bd93-497df769c9af] Running
	I1124 03:14:19.839684  256790 system_pods.go:89] "kube-controller-manager-no-preload-182765" [7735413f-2120-4660-be51-b157a8e1e9fa] Running
	I1124 03:14:19.839687  256790 system_pods.go:89] "kube-proxy-fx42v" [4c8c52d6-d4fd-4be2-8246-f96d95997a62] Running
	I1124 03:14:19.839691  256790 system_pods.go:89] "kube-scheduler-no-preload-182765" [202684ee-474e-4f60-afa0-b5ddabf71edc] Running
	I1124 03:14:19.839694  256790 system_pods.go:89] "storage-provisioner" [271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1] Running
	I1124 03:14:19.839701  256790 system_pods.go:126] duration metric: took 1.688658372s to wait for k8s-apps to be running ...
	I1124 03:14:19.839712  256790 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:14:19.839755  256790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:14:19.852890  256790 system_svc.go:56] duration metric: took 13.168343ms WaitForService to wait for kubelet
	I1124 03:14:19.852957  256790 kubeadm.go:587] duration metric: took 16.035598027s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:14:19.852979  256790 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:14:19.855699  256790 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:14:19.855728  256790 node_conditions.go:123] node cpu capacity is 8
	I1124 03:14:19.855745  256790 node_conditions.go:105] duration metric: took 2.761809ms to run NodePressure ...
	I1124 03:14:19.855792  256790 start.go:242] waiting for startup goroutines ...
	I1124 03:14:19.855806  256790 start.go:247] waiting for cluster config update ...
	I1124 03:14:19.855819  256790 start.go:256] writing updated cluster config ...
	I1124 03:14:19.856129  256790 ssh_runner.go:195] Run: rm -f paused
	I1124 03:14:19.861135  256790 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:14:19.865065  256790 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lcrl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:19.869262  256790 pod_ready.go:94] pod "coredns-66bc5c9577-lcrl8" is "Ready"
	I1124 03:14:19.869280  256790 pod_ready.go:86] duration metric: took 4.193402ms for pod "coredns-66bc5c9577-lcrl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:19.871213  256790 pod_ready.go:83] waiting for pod "etcd-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:19.874764  256790 pod_ready.go:94] pod "etcd-no-preload-182765" is "Ready"
	I1124 03:14:19.874797  256790 pod_ready.go:86] duration metric: took 3.566214ms for pod "etcd-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:19.876539  256790 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:19.880318  256790 pod_ready.go:94] pod "kube-apiserver-no-preload-182765" is "Ready"
	I1124 03:14:19.880345  256790 pod_ready.go:86] duration metric: took 3.788255ms for pod "kube-apiserver-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:19.882349  256790 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:20.264978  256790 pod_ready.go:94] pod "kube-controller-manager-no-preload-182765" is "Ready"
	I1124 03:14:20.265001  256790 pod_ready.go:86] duration metric: took 382.630322ms for pod "kube-controller-manager-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:20.466538  256790 pod_ready.go:83] waiting for pod "kube-proxy-fx42v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:20.865522  256790 pod_ready.go:94] pod "kube-proxy-fx42v" is "Ready"
	I1124 03:14:20.865548  256790 pod_ready.go:86] duration metric: took 398.983015ms for pod "kube-proxy-fx42v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:21.065507  256790 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:21.465719  256790 pod_ready.go:94] pod "kube-scheduler-no-preload-182765" is "Ready"
	I1124 03:14:21.465743  256790 pod_ready.go:86] duration metric: took 400.213094ms for pod "kube-scheduler-no-preload-182765" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:21.465755  256790 pod_ready.go:40] duration metric: took 1.604587225s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:14:21.507898  256790 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:14:21.510018  256790 out.go:179] * Done! kubectl is now configured to use "no-preload-182765" cluster and "default" namespace by default
	W1124 03:14:20.688174  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:22.688990  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:14:21.627184  222154 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.06042351s)
	W1124 03:14:21.627222  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 03:14:21.627234  222154 logs.go:123] Gathering logs for kube-apiserver [e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f] ...
	I1124 03:14:21.627248  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f"
	I1124 03:14:21.662569  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:14:21.662605  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:21.701452  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:14:21.701475  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:21.754508  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:14:21.754538  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:21.790005  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:14:21.790032  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:14:21.838678  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:14:21.838709  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:14:21.869280  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:14:21.869304  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:14:24.463017  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:14:24.998983  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:39740->192.168.76.2:8443: read: connection reset by peer
	I1124 03:14:24.999054  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:14:24.999109  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:14:25.029909  222154 cri.go:89] found id: "e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f"
	I1124 03:14:25.029932  222154 cri.go:89] found id: "195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:25.029939  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:25.029943  222154 cri.go:89] found id: ""
	I1124 03:14:25.029951  222154 logs.go:282] 3 containers: [e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:14:25.030015  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.034874  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.038716  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.042498  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:14:25.042559  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:14:25.070117  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:25.070196  222154 cri.go:89] found id: ""
	I1124 03:14:25.070218  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:14:25.070287  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.074335  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:14:25.074400  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:14:25.101919  222154 cri.go:89] found id: ""
	I1124 03:14:25.101945  222154 logs.go:282] 0 containers: []
	W1124 03:14:25.101953  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:14:25.101959  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:14:25.102003  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:14:25.128285  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:25.128306  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:25.128311  222154 cri.go:89] found id: ""
	I1124 03:14:25.128318  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:14:25.128361  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.132609  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.136499  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:14:25.136557  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:14:25.161931  222154 cri.go:89] found id: ""
	I1124 03:14:25.161951  222154 logs.go:282] 0 containers: []
	W1124 03:14:25.161959  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:14:25.161969  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:14:25.162022  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:14:25.189937  222154 cri.go:89] found id: "e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b"
	I1124 03:14:25.189956  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:25.189962  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:25.189966  222154 cri.go:89] found id: ""
	I1124 03:14:25.189976  222154 logs.go:282] 3 containers: [e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:14:25.190028  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.194426  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.198094  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:25.201725  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:14:25.201766  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:14:25.227045  222154 cri.go:89] found id: ""
	I1124 03:14:25.227065  222154 logs.go:282] 0 containers: []
	W1124 03:14:25.227071  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:14:25.227077  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:14:25.227119  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:14:25.254768  222154 cri.go:89] found id: ""
	I1124 03:14:25.254825  222154 logs.go:282] 0 containers: []
	W1124 03:14:25.254836  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:14:25.254848  222154 logs.go:123] Gathering logs for kube-apiserver [195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e] ...
	I1124 03:14:25.254862  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 195408c9d902bb128bbb74dcd532f5fb0db48449a316077110866bd365d5883e"
	I1124 03:14:25.289249  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:14:25.289273  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:25.320254  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:14:25.320278  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:25.354084  222154 logs.go:123] Gathering logs for kube-controller-manager [e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b] ...
	I1124 03:14:25.354120  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b"
	I1124 03:14:25.382494  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:14:25.382525  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:25.409640  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:14:25.409668  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:14:25.441589  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:14:25.441616  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:14:25.526132  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:14:25.526164  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:14:25.541025  222154 logs.go:123] Gathering logs for kube-apiserver [e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f] ...
	I1124 03:14:25.541050  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f"
	I1124 03:14:25.574168  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:14:25.574196  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:25.607276  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:14:25.607300  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:25.659107  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:14:25.659139  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:25.695245  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:14:25.695274  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:14:25.743387  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:14:25.743415  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:14:25.800879  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1124 03:14:25.189421  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:27.688601  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	W1124 03:14:29.688767  261872 pod_ready.go:104] pod "coredns-5dd5756b68-gfsqm" is not "Ready", error: <nil>
	I1124 03:14:28.301834  222154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:14:28.302307  222154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 03:14:28.302363  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 03:14:28.302423  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 03:14:28.331161  222154 cri.go:89] found id: "e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f"
	I1124 03:14:28.331179  222154 cri.go:89] found id: "446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:28.331183  222154 cri.go:89] found id: ""
	I1124 03:14:28.331190  222154 logs.go:282] 2 containers: [e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304]
	I1124 03:14:28.331234  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.335257  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.338851  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 03:14:28.338906  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 03:14:28.365611  222154 cri.go:89] found id: "7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:28.365628  222154 cri.go:89] found id: ""
	I1124 03:14:28.365635  222154 logs.go:282] 1 containers: [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25]
	I1124 03:14:28.365681  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.369585  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 03:14:28.369637  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 03:14:28.395430  222154 cri.go:89] found id: ""
	I1124 03:14:28.395453  222154 logs.go:282] 0 containers: []
	W1124 03:14:28.395465  222154 logs.go:284] No container was found matching "coredns"
	I1124 03:14:28.395474  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 03:14:28.395539  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 03:14:28.422428  222154 cri.go:89] found id: "6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:28.422451  222154 cri.go:89] found id: "e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:28.422459  222154 cri.go:89] found id: ""
	I1124 03:14:28.422468  222154 logs.go:282] 2 containers: [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f]
	I1124 03:14:28.422524  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.426610  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.430815  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 03:14:28.430878  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 03:14:28.457418  222154 cri.go:89] found id: ""
	I1124 03:14:28.457445  222154 logs.go:282] 0 containers: []
	W1124 03:14:28.457453  222154 logs.go:284] No container was found matching "kube-proxy"
	I1124 03:14:28.457459  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 03:14:28.457523  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 03:14:28.483306  222154 cri.go:89] found id: "e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b"
	I1124 03:14:28.483327  222154 cri.go:89] found id: "7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:28.483333  222154 cri.go:89] found id: "c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:28.483337  222154 cri.go:89] found id: ""
	I1124 03:14:28.483346  222154 logs.go:282] 3 containers: [e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8]
	I1124 03:14:28.483402  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.487568  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.491362  222154 ssh_runner.go:195] Run: which crictl
	I1124 03:14:28.495112  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 03:14:28.495167  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 03:14:28.520499  222154 cri.go:89] found id: ""
	I1124 03:14:28.520517  222154 logs.go:282] 0 containers: []
	W1124 03:14:28.520524  222154 logs.go:284] No container was found matching "kindnet"
	I1124 03:14:28.520530  222154 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 03:14:28.520574  222154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 03:14:28.547257  222154 cri.go:89] found id: ""
	I1124 03:14:28.547284  222154 logs.go:282] 0 containers: []
	W1124 03:14:28.547297  222154 logs.go:284] No container was found matching "storage-provisioner"
	I1124 03:14:28.547309  222154 logs.go:123] Gathering logs for kube-controller-manager [7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79] ...
	I1124 03:14:28.547324  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7897bc37a09306227b85dd03b05ffcaacc717ed0a9d4c86df1be0813a1e55d79"
	I1124 03:14:28.574834  222154 logs.go:123] Gathering logs for containerd ...
	I1124 03:14:28.574857  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 03:14:28.622416  222154 logs.go:123] Gathering logs for container status ...
	I1124 03:14:28.622444  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 03:14:28.653664  222154 logs.go:123] Gathering logs for kubelet ...
	I1124 03:14:28.653697  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 03:14:28.749298  222154 logs.go:123] Gathering logs for dmesg ...
	I1124 03:14:28.749329  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 03:14:28.763314  222154 logs.go:123] Gathering logs for describe nodes ...
	I1124 03:14:28.763341  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 03:14:28.821048  222154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 03:14:28.821064  222154 logs.go:123] Gathering logs for kube-apiserver [e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f] ...
	I1124 03:14:28.821075  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e26ef11604ca05706a60058a9558dc08457b00a46fde13420745ddefc95a9e5f"
	I1124 03:14:28.854028  222154 logs.go:123] Gathering logs for etcd [7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25] ...
	I1124 03:14:28.854052  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc805cf920b9745091a7b3d3fbdc692cc1cabe11ba46a77f4446cbf972b3f25"
	I1124 03:14:28.886240  222154 logs.go:123] Gathering logs for kube-controller-manager [e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b] ...
	I1124 03:14:28.886270  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e84836a6835498b74754bdb876a14b2ca74b74b9929fbf01f31d142c9c66dd6b"
	I1124 03:14:28.914992  222154 logs.go:123] Gathering logs for kube-controller-manager [c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8] ...
	I1124 03:14:28.915020  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c20c307edc08ad2b0e641c2741ef6bd1b00e6ca37a8373589a23c9f8b4a2e0f8"
	I1124 03:14:28.951240  222154 logs.go:123] Gathering logs for kube-apiserver [446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304] ...
	I1124 03:14:28.951269  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 446002bc916f18e1cc65fcb0bf266cc08b1f6eac40114d7008be0993734ca304"
	I1124 03:14:28.984120  222154 logs.go:123] Gathering logs for kube-scheduler [6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5] ...
	I1124 03:14:28.984149  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e22771b7ed1d8cf4dc0000cb8122afe0c3ac70a4e355bfd8527a3a3296dd7c5"
	I1124 03:14:29.038147  222154 logs.go:123] Gathering logs for kube-scheduler [e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f] ...
	I1124 03:14:29.038180  222154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e92f21b67c7bcdca018723418c795188645ce49ea7ade499f17e6ab5c966f63f"
	I1124 03:14:31.689259  261872 pod_ready.go:94] pod "coredns-5dd5756b68-gfsqm" is "Ready"
	I1124 03:14:31.689293  261872 pod_ready.go:86] duration metric: took 36.006314318s for pod "coredns-5dd5756b68-gfsqm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:31.692608  261872 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-838815" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:31.697142  261872 pod_ready.go:94] pod "etcd-old-k8s-version-838815" is "Ready"
	I1124 03:14:31.697172  261872 pod_ready.go:86] duration metric: took 4.528886ms for pod "etcd-old-k8s-version-838815" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:31.700674  261872 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-838815" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:31.706373  261872 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-838815" is "Ready"
	I1124 03:14:31.706396  261872 pod_ready.go:86] duration metric: took 5.69449ms for pod "kube-apiserver-old-k8s-version-838815" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:31.709795  261872 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-838815" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:31.887745  261872 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-838815" is "Ready"
	I1124 03:14:31.887770  261872 pod_ready.go:86] duration metric: took 177.952147ms for pod "kube-controller-manager-old-k8s-version-838815" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:32.088849  261872 pod_ready.go:83] waiting for pod "kube-proxy-cz68g" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:32.487428  261872 pod_ready.go:94] pod "kube-proxy-cz68g" is "Ready"
	I1124 03:14:32.487452  261872 pod_ready.go:86] duration metric: took 398.574138ms for pod "kube-proxy-cz68g" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:32.687983  261872 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-838815" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:33.087127  261872 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-838815" is "Ready"
	I1124 03:14:33.087154  261872 pod_ready.go:86] duration metric: took 399.141721ms for pod "kube-scheduler-old-k8s-version-838815" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:33.087171  261872 pod_ready.go:40] duration metric: took 37.410240811s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:14:33.140881  261872 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 03:14:33.142498  261872 out.go:203] 
	W1124 03:14:33.143563  261872 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:14:33.144665  261872 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:14:33.145816  261872 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-838815" cluster and "default" namespace by default
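Editor's note: the repeated "Checking apiserver healthz at https://192.168.76.2:8443/healthz ... stopped" entries above come from minikube's apiserver health loop: it polls /healthz and, on timeout, connection refused, or connection reset, falls back to re-listing containers with crictl and re-gathering component logs. A minimal, self-contained Go sketch of such a poll follows; it is not minikube's actual api_server.go, and the endpoint, interval, and TLS handling are assumptions taken from the log above.

// healthzpoll.go: minimal sketch of an apiserver /healthz poll with retries.
// TLS verification is skipped only because this sketch does not load the
// cluster's CA bundle (an assumption, not what minikube does).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		// Per-request timeout; mirrors the "Client.Timeout exceeded" errors in the log.
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, string(body))
				return nil
			}
		} else {
			// e.g. connection refused / connection reset, as seen in the log above.
			fmt.Printf("healthz not ready: %v\n", err)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s never became healthy within %s", url, deadline)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}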
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0e24698e04c6c       56cc512116c8f       9 seconds ago       Running             busybox                   0                   d498c4db444ad       busybox                                     default
	a907bd80f2cda       52546a367cc9e       15 seconds ago      Running             coredns                   0                   913067ccf951b       coredns-66bc5c9577-lcrl8                    kube-system
	761d04b7a866b       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   d05b913240157       storage-provisioner                         kube-system
	4e21fd2b52dea       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   8277e8f2a73fb       kindnet-ncvw4                               kube-system
	6633b13393dad       fc25172553d79       29 seconds ago      Running             kube-proxy                0                   fcfd195e317a7       kube-proxy-fx42v                            kube-system
	0e17455baa5e8       5f1f5298c888d       40 seconds ago      Running             etcd                      0                   f00910ba25c59       etcd-no-preload-182765                      kube-system
	81f1b5b22bae8       c3994bc696102       40 seconds ago      Running             kube-apiserver            0                   b9e8ac9695fe9       kube-apiserver-no-preload-182765            kube-system
	3ec30b5cb1d0c       7dd6aaa1717ab       40 seconds ago      Running             kube-scheduler            0                   79973ae97ccae       kube-scheduler-no-preload-182765            kube-system
	89d340829b448       c80c8dbafe7dd       40 seconds ago      Running             kube-controller-manager   0                   d7070c75471eb       kube-controller-manager-no-preload-182765   kube-system
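Editor's note: the container-status table above and the containerd journal excerpt below are produced by the same commands minikube runs over SSH in the log (sudo crictl ps -a, sudo journalctl -u containerd -n 400, sudo journalctl -u kubelet -n 400). A rough Go sketch of collecting the same diagnostics directly on a node follows; it assumes sudo, crictl, and journalctl are available on PATH and is only a local reproduction aid, not minikube's logs.go.

// collectdiag.go: sketch of gathering the node diagnostics seen in this report.
// Command lines are copied from the log; sudo/crictl/journalctl availability
// on the node is an assumption of this sketch.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("==> %s %v <==\n%s\n", name, args, out)
	if err != nil {
		fmt.Printf("(command failed: %v)\n", err)
	}
}

func main() {
	// Container status, as in "sudo crictl ps -a".
	run("sudo", "crictl", "ps", "-a")
	// Last 400 lines of the container runtime and kubelet journals.
	run("sudo", "journalctl", "-u", "containerd", "-n", "400")
	run("sudo", "journalctl", "-u", "kubelet", "-n", "400")
}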
	
	
	==> containerd <==
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.309535111Z" level=info msg="CreateContainer within sandbox \"913067ccf951b9c142727d3dd7ab2a5a7999ff58eb755c533978949dd8951a76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.309991569Z" level=info msg="StartContainer for \"761d04b7a866bf1345332bb965846b86e4dc5385c0317a0cac573a55b1c77456\""
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.310913936Z" level=info msg="connecting to shim 761d04b7a866bf1345332bb965846b86e4dc5385c0317a0cac573a55b1c77456" address="unix:///run/containerd/s/88cd9c3715439028092fd9e4f0cde5501ec2b76bf1d0b3dfb1b51222af0114f1" protocol=ttrpc version=3
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.317472027Z" level=info msg="Container a907bd80f2cda76ad20df20a88f4a975ea9e29e53628c2c4358d93332c4ea36f: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.325045995Z" level=info msg="CreateContainer within sandbox \"913067ccf951b9c142727d3dd7ab2a5a7999ff58eb755c533978949dd8951a76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a907bd80f2cda76ad20df20a88f4a975ea9e29e53628c2c4358d93332c4ea36f\""
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.325551646Z" level=info msg="StartContainer for \"a907bd80f2cda76ad20df20a88f4a975ea9e29e53628c2c4358d93332c4ea36f\""
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.326434580Z" level=info msg="connecting to shim a907bd80f2cda76ad20df20a88f4a975ea9e29e53628c2c4358d93332c4ea36f" address="unix:///run/containerd/s/a25a611608eb1d7c3e8553bc5734490597eaf3d3bfd095eb02083e82c3aa5de3" protocol=ttrpc version=3
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.362246253Z" level=info msg="StartContainer for \"761d04b7a866bf1345332bb965846b86e4dc5385c0317a0cac573a55b1c77456\" returns successfully"
	Nov 24 03:14:18 no-preload-182765 containerd[663]: time="2025-11-24T03:14:18.373966046Z" level=info msg="StartContainer for \"a907bd80f2cda76ad20df20a88f4a975ea9e29e53628c2c4358d93332c4ea36f\" returns successfully"
	Nov 24 03:14:21 no-preload-182765 containerd[663]: time="2025-11-24T03:14:21.994520605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:cf658218-2786-43b2-a609-0e21c6244867,Namespace:default,Attempt:0,}"
	Nov 24 03:14:22 no-preload-182765 containerd[663]: time="2025-11-24T03:14:22.047166418Z" level=info msg="connecting to shim d498c4db444adf927d051b8ca8c71cfee20b5bd91cba471418685a32fed3c98c" address="unix:///run/containerd/s/89f61d7d9a72607b406d9d906e7406c38c166bfe98ccc1a55d850e5de7e78be0" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:14:22 no-preload-182765 containerd[663]: time="2025-11-24T03:14:22.119004404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:cf658218-2786-43b2-a609-0e21c6244867,Namespace:default,Attempt:0,} returns sandbox id \"d498c4db444adf927d051b8ca8c71cfee20b5bd91cba471418685a32fed3c98c\""
	Nov 24 03:14:22 no-preload-182765 containerd[663]: time="2025-11-24T03:14:22.120702528Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.224274811Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.225118452Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396642"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.226444032Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.228549959Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.229187796Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.108443078s"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.229265451Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.233347412Z" level=info msg="CreateContainer within sandbox \"d498c4db444adf927d051b8ca8c71cfee20b5bd91cba471418685a32fed3c98c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.241737160Z" level=info msg="Container 0e24698e04c6c2e0de3138224501884475e3eb7ca71de01b3e3d85f72d5a90da: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.247912281Z" level=info msg="CreateContainer within sandbox \"d498c4db444adf927d051b8ca8c71cfee20b5bd91cba471418685a32fed3c98c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"0e24698e04c6c2e0de3138224501884475e3eb7ca71de01b3e3d85f72d5a90da\""
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.248356223Z" level=info msg="StartContainer for \"0e24698e04c6c2e0de3138224501884475e3eb7ca71de01b3e3d85f72d5a90da\""
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.249308762Z" level=info msg="connecting to shim 0e24698e04c6c2e0de3138224501884475e3eb7ca71de01b3e3d85f72d5a90da" address="unix:///run/containerd/s/89f61d7d9a72607b406d9d906e7406c38c166bfe98ccc1a55d850e5de7e78be0" protocol=ttrpc version=3
	Nov 24 03:14:24 no-preload-182765 containerd[663]: time="2025-11-24T03:14:24.298444899Z" level=info msg="StartContainer for \"0e24698e04c6c2e0de3138224501884475e3eb7ca71de01b3e3d85f72d5a90da\" returns successfully"
	
	
	==> coredns [a907bd80f2cda76ad20df20a88f4a975ea9e29e53628c2c4358d93332c4ea36f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51020 - 47503 "HINFO IN 6157636609081595951.3166601699698008917. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.071843124s
	
	
	==> describe nodes <==
	Name:               no-preload-182765
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-182765
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=no-preload-182765
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_13_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:13:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-182765
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:14:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:14:28 +0000   Mon, 24 Nov 2025 03:13:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:14:28 +0000   Mon, 24 Nov 2025 03:13:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:14:28 +0000   Mon, 24 Nov 2025 03:13:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:14:28 +0000   Mon, 24 Nov 2025 03:14:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-182765
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                dfa5c123-4c4a-4093-8de8-3ab7053a4f09
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-lcrl8                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-no-preload-182765                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-ncvw4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-182765             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-182765    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-fx42v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-182765             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 35s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  35s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  35s   kubelet          Node no-preload-182765 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s   kubelet          Node no-preload-182765 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s   kubelet          Node no-preload-182765 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node no-preload-182765 event: Registered Node no-preload-182765 in Controller
	  Normal  NodeReady                16s   kubelet          Node no-preload-182765 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [0e17455baa5e87392cbabd7c87243a3cdd8cae150abbf559b91ccdca7581766e] <==
	{"level":"warn","ts":"2025-11-24T03:13:54.780389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.797077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.807702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.821947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.826679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.839623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.845556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.855759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.866599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.874643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.884699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.893994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.909754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.916894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.923620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.934319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.945705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.966308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.981854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.990043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:54.996263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:55.013830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:55.023096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:55.031653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:55.089632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:14:33 up 56 min,  0 user,  load average: 2.02, 2.68, 1.91
	Linux no-preload-182765 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4e21fd2b52dea181df0dd70ffe9e802ac15de322719f3e7f928b0dbf01549b41] <==
	I1124 03:14:07.433129       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:14:07.433402       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:14:07.433558       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:14:07.433574       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:14:07.433592       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:14:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:14:07.638356       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:14:07.638399       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:14:07.638414       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:14:07.638545       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:14:08.038702       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:14:08.038726       1 metrics.go:72] Registering metrics
	I1124 03:14:08.038805       1 controller.go:711] "Syncing nftables rules"
	I1124 03:14:17.644323       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:14:17.644389       1 main.go:301] handling current node
	I1124 03:14:27.639352       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:14:27.639386       1 main.go:301] handling current node
	
	
	==> kube-apiserver [81f1b5b22bae8268c7c78bb74e6e2397a13fc858cb1c682e7bbefe963a285b5b] <==
	I1124 03:13:55.697109       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1124 03:13:55.702540       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:13:55.702706       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:13:55.710749       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:13:55.710889       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:13:55.733250       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 03:13:55.736273       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:13:56.601251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:13:56.604941       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:13:56.604960       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:13:57.082017       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:13:57.116868       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:13:57.195452       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:13:57.201177       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 03:13:57.202093       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:13:57.205719       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:13:57.637522       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:13:58.330350       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:13:58.338938       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:13:58.344954       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:14:03.290830       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:14:03.341569       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:14:03.346323       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:14:03.440937       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 03:14:30.790598       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:54310: use of closed network connection
	
	
	==> kube-controller-manager [89d340829b44812b31018d917cbbe98a95714f81eba44bfe7a7308537f360085] <==
	I1124 03:14:02.637248       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 03:14:02.637286       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:14:02.637345       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:14:02.637398       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:14:02.637399       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 03:14:02.637428       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 03:14:02.637402       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 03:14:02.637496       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-182765"
	I1124 03:14:02.637541       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:14:02.637543       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 03:14:02.637429       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:14:02.637838       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:14:02.637942       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:14:02.637970       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:14:02.638030       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:14:02.638148       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:14:02.638287       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:14:02.638335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:14:02.640420       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:14:02.642208       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:14:02.642276       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:14:02.649384       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 03:14:02.650529       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 03:14:02.660456       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:14:22.640392       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6633b13393dad2206e1ce736b781156f8e7c78d55887d396bc37287b6aaeb952] <==
	I1124 03:14:04.211478       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:14:04.276845       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:14:04.377744       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:14:04.377824       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 03:14:04.377928       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:14:04.400162       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:14:04.400217       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:14:04.405337       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:14:04.406059       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:14:04.406134       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:14:04.408594       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:14:04.408607       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:14:04.408613       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:14:04.408618       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:14:04.408598       1 config.go:200] "Starting service config controller"
	I1124 03:14:04.408645       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:14:04.408655       1 config.go:309] "Starting node config controller"
	I1124 03:14:04.408767       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:14:04.408833       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:14:04.509279       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:14:04.509292       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:14:04.509315       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3ec30b5cb1d0c7532ea213820c5e07c77941ee5e43af8e7204cb7bf2fa9f092c] <==
	I1124 03:13:55.661447       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 03:13:55.670804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:13:55.671548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:13:55.671548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:13:55.671627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:13:55.671634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:13:55.671727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:13:55.672335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:13:55.672761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:13:55.672922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:13:55.673015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:13:55.673052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:13:55.673992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:13:55.673991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:13:55.674681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:13:55.674683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:13:55.674830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:13:56.583081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:13:56.598203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:13:56.599124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:13:56.646890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:13:56.813350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:13:56.815310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:13:56.963827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 03:13:59.960968       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:13:59 no-preload-182765 kubelet[2164]: I1124 03:13:59.220367    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-182765" podStartSLOduration=1.22034543 podStartE2EDuration="1.22034543s" podCreationTimestamp="2025-11-24 03:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:59.207330931 +0000 UTC m=+1.126221766" watchObservedRunningTime="2025-11-24 03:13:59.22034543 +0000 UTC m=+1.139236255"
	Nov 24 03:13:59 no-preload-182765 kubelet[2164]: I1124 03:13:59.229558    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-182765" podStartSLOduration=1.229535176 podStartE2EDuration="1.229535176s" podCreationTimestamp="2025-11-24 03:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:59.22054426 +0000 UTC m=+1.139435091" watchObservedRunningTime="2025-11-24 03:13:59.229535176 +0000 UTC m=+1.148426009"
	Nov 24 03:13:59 no-preload-182765 kubelet[2164]: I1124 03:13:59.241368    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-182765" podStartSLOduration=1.241345133 podStartE2EDuration="1.241345133s" podCreationTimestamp="2025-11-24 03:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:59.230192246 +0000 UTC m=+1.149083079" watchObservedRunningTime="2025-11-24 03:13:59.241345133 +0000 UTC m=+1.160235965"
	Nov 24 03:13:59 no-preload-182765 kubelet[2164]: I1124 03:13:59.250144    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-182765" podStartSLOduration=1.250126869 podStartE2EDuration="1.250126869s" podCreationTimestamp="2025-11-24 03:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:13:59.241542645 +0000 UTC m=+1.160433468" watchObservedRunningTime="2025-11-24 03:13:59.250126869 +0000 UTC m=+1.169017700"
	Nov 24 03:14:02 no-preload-182765 kubelet[2164]: I1124 03:14:02.703848    2164 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 03:14:02 no-preload-182765 kubelet[2164]: I1124 03:14:02.704454    2164 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486031    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c8c52d6-d4fd-4be2-8246-f96d95997a62-lib-modules\") pod \"kube-proxy-fx42v\" (UID: \"4c8c52d6-d4fd-4be2-8246-f96d95997a62\") " pod="kube-system/kube-proxy-fx42v"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486078    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkhkp\" (UniqueName: \"kubernetes.io/projected/4c8c52d6-d4fd-4be2-8246-f96d95997a62-kube-api-access-kkhkp\") pod \"kube-proxy-fx42v\" (UID: \"4c8c52d6-d4fd-4be2-8246-f96d95997a62\") " pod="kube-system/kube-proxy-fx42v"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486103    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d2a43f2-69e3-4768-8e15-39fbe53d92f9-lib-modules\") pod \"kindnet-ncvw4\" (UID: \"6d2a43f2-69e3-4768-8e15-39fbe53d92f9\") " pod="kube-system/kindnet-ncvw4"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486169    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c8c52d6-d4fd-4be2-8246-f96d95997a62-xtables-lock\") pod \"kube-proxy-fx42v\" (UID: \"4c8c52d6-d4fd-4be2-8246-f96d95997a62\") " pod="kube-system/kube-proxy-fx42v"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486203    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c8c52d6-d4fd-4be2-8246-f96d95997a62-kube-proxy\") pod \"kube-proxy-fx42v\" (UID: \"4c8c52d6-d4fd-4be2-8246-f96d95997a62\") " pod="kube-system/kube-proxy-fx42v"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486226    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6d2a43f2-69e3-4768-8e15-39fbe53d92f9-cni-cfg\") pod \"kindnet-ncvw4\" (UID: \"6d2a43f2-69e3-4768-8e15-39fbe53d92f9\") " pod="kube-system/kindnet-ncvw4"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486256    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d2a43f2-69e3-4768-8e15-39fbe53d92f9-xtables-lock\") pod \"kindnet-ncvw4\" (UID: \"6d2a43f2-69e3-4768-8e15-39fbe53d92f9\") " pod="kube-system/kindnet-ncvw4"
	Nov 24 03:14:03 no-preload-182765 kubelet[2164]: I1124 03:14:03.486283    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv66f\" (UniqueName: \"kubernetes.io/projected/6d2a43f2-69e3-4768-8e15-39fbe53d92f9-kube-api-access-kv66f\") pod \"kindnet-ncvw4\" (UID: \"6d2a43f2-69e3-4768-8e15-39fbe53d92f9\") " pod="kube-system/kindnet-ncvw4"
	Nov 24 03:14:04 no-preload-182765 kubelet[2164]: I1124 03:14:04.209871    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fx42v" podStartSLOduration=1.209850023 podStartE2EDuration="1.209850023s" podCreationTimestamp="2025-11-24 03:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:14:04.209837062 +0000 UTC m=+6.128727894" watchObservedRunningTime="2025-11-24 03:14:04.209850023 +0000 UTC m=+6.128740856"
	Nov 24 03:14:08 no-preload-182765 kubelet[2164]: I1124 03:14:08.223011    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ncvw4" podStartSLOduration=2.417680204 podStartE2EDuration="5.222986026s" podCreationTimestamp="2025-11-24 03:14:03 +0000 UTC" firstStartedPulling="2025-11-24 03:14:04.336369549 +0000 UTC m=+6.255260360" lastFinishedPulling="2025-11-24 03:14:07.141675371 +0000 UTC m=+9.060566182" observedRunningTime="2025-11-24 03:14:08.222733252 +0000 UTC m=+10.141624086" watchObservedRunningTime="2025-11-24 03:14:08.222986026 +0000 UTC m=+10.141876858"
	Nov 24 03:14:17 no-preload-182765 kubelet[2164]: I1124 03:14:17.737616    2164 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:14:18 no-preload-182765 kubelet[2164]: I1124 03:14:18.002308    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f4km\" (UniqueName: \"kubernetes.io/projected/3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7-kube-api-access-9f4km\") pod \"coredns-66bc5c9577-lcrl8\" (UID: \"3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7\") " pod="kube-system/coredns-66bc5c9577-lcrl8"
	Nov 24 03:14:18 no-preload-182765 kubelet[2164]: I1124 03:14:18.002375    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1-tmp\") pod \"storage-provisioner\" (UID: \"271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1\") " pod="kube-system/storage-provisioner"
	Nov 24 03:14:18 no-preload-182765 kubelet[2164]: I1124 03:14:18.002464    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7-config-volume\") pod \"coredns-66bc5c9577-lcrl8\" (UID: \"3bcf6296-f9cf-4d6b-aa33-bec8258dc1e7\") " pod="kube-system/coredns-66bc5c9577-lcrl8"
	Nov 24 03:14:18 no-preload-182765 kubelet[2164]: I1124 03:14:18.002501    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw8fd\" (UniqueName: \"kubernetes.io/projected/271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1-kube-api-access-xw8fd\") pod \"storage-provisioner\" (UID: \"271c17f3-f4c2-43f3-a5bd-3e092e4b0cd1\") " pod="kube-system/storage-provisioner"
	Nov 24 03:14:19 no-preload-182765 kubelet[2164]: I1124 03:14:19.246396    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lcrl8" podStartSLOduration=16.246372758 podStartE2EDuration="16.246372758s" podCreationTimestamp="2025-11-24 03:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:14:19.246339636 +0000 UTC m=+21.165230468" watchObservedRunningTime="2025-11-24 03:14:19.246372758 +0000 UTC m=+21.165263590"
	Nov 24 03:14:19 no-preload-182765 kubelet[2164]: I1124 03:14:19.267821    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.267667894 podStartE2EDuration="15.267667894s" podCreationTimestamp="2025-11-24 03:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:14:19.266421815 +0000 UTC m=+21.185312651" watchObservedRunningTime="2025-11-24 03:14:19.267667894 +0000 UTC m=+21.186558717"
	Nov 24 03:14:21 no-preload-182765 kubelet[2164]: I1124 03:14:21.723163    2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrk5\" (UniqueName: \"kubernetes.io/projected/cf658218-2786-43b2-a609-0e21c6244867-kube-api-access-mgrk5\") pod \"busybox\" (UID: \"cf658218-2786-43b2-a609-0e21c6244867\") " pod="default/busybox"
	Nov 24 03:14:25 no-preload-182765 kubelet[2164]: I1124 03:14:25.263717    2164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.153949098 podStartE2EDuration="4.263693817s" podCreationTimestamp="2025-11-24 03:14:21 +0000 UTC" firstStartedPulling="2025-11-24 03:14:22.120371533 +0000 UTC m=+24.039262350" lastFinishedPulling="2025-11-24 03:14:24.230116258 +0000 UTC m=+26.149007069" observedRunningTime="2025-11-24 03:14:25.263594247 +0000 UTC m=+27.182485067" watchObservedRunningTime="2025-11-24 03:14:25.263693817 +0000 UTC m=+27.182584649"
	
	
	==> storage-provisioner [761d04b7a866bf1345332bb965846b86e4dc5385c0317a0cac573a55b1c77456] <==
	I1124 03:14:18.372229       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:14:18.381064       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:14:18.381112       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:14:18.382889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:18.388152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:14:18.388341       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:14:18.388482       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6db7df4-b085-465b-be4e-b02a26c1b5f7", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-182765_f7a9146e-611c-4253-9310-4f29c0034e99 became leader
	I1124 03:14:18.388520       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-182765_f7a9146e-611c-4253-9310-4f29c0034e99!
	W1124 03:14:18.390559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:18.393230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:14:18.488826       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-182765_f7a9146e-611c-4253-9310-4f29c0034e99!
	W1124 03:14:20.396070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:20.401005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:22.404078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:22.408760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:24.411757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:24.416610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:26.419866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:26.423831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:28.426840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:28.430746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:30.433252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:30.436939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:32.440497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:32.444492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-182765 -n no-preload-182765
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-182765 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (13.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (14.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-427637 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [218931ee-0865-4000-b423-6af3bc31f260] Pending
helpers_test.go:352: "busybox" [218931ee-0865-4000-b423-6af3bc31f260] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [218931ee-0865-4000-b423-6af3bc31f260] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004316411s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-427637 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
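The assertion above checks the open-file (file descriptor) soft limit inside the pod: the test expects 'ulimit -n' to report 1048576, but the busybox container reports the default 1024, which suggests the runtime's NOFILE limit was not raised as the test expects. A minimal way to re-run the same check by hand, reusing the exact command from the line above, is:

	# exec into the test pod and print the file-descriptor soft limit
	kubectl --context embed-certs-427637 exec busybox -- /bin/sh -c "ulimit -n"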
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-427637
helpers_test.go:243: (dbg) docker inspect embed-certs-427637:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4",
	        "Created": "2025-11-24T03:14:56.013029284Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 276405,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:14:56.063628489Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4/hosts",
	        "LogPath": "/var/lib/docker/containers/1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4/1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4-json.log",
	        "Name": "/embed-certs-427637",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-427637:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-427637",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4",
	                "LowerDir": "/var/lib/docker/overlay2/50396b832abdd5e1ae4a1e8d43d84640d1e73103b450beb3bff6c75ff8be3d1e-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/50396b832abdd5e1ae4a1e8d43d84640d1e73103b450beb3bff6c75ff8be3d1e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/50396b832abdd5e1ae4a1e8d43d84640d1e73103b450beb3bff6c75ff8be3d1e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/50396b832abdd5e1ae4a1e8d43d84640d1e73103b450beb3bff6c75ff8be3d1e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-427637",
	                "Source": "/var/lib/docker/volumes/embed-certs-427637/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-427637",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-427637",
	                "name.minikube.sigs.k8s.io": "embed-certs-427637",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b6412f4b84f77c8156799fd10fa9507d23048da69d6fe3d69bc676ef6eaaf458",
	            "SandboxKey": "/var/run/docker/netns/b6412f4b84f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-427637": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da14df84615929cdce81de230728b5b4ded52dafce00fc44a291a9d383f39244",
	                    "EndpointID": "32a051c6ae17ac1398272ee18bae359ff004687b6c630ca2b11d2f89e64121c8",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "52:6a:ee:74:71:df",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-427637",
	                        "1966880807ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
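In the inspect output above, the requested PortBindings all carry an empty HostPort, while the ports Docker actually published appear under NetworkSettings.Ports. A short sketch of reading one binding back with a Go template, mirroring the cli_runner invocations that appear further down in these logs (container name taken from this report):

	# Print the host port published for 22/tcp on the node container; for the state
	# captured above this should print 33082.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-427637

The same template is what minikube itself uses when it opens SSH connections to the node, which is why the post-mortem helpers shell out to it repeatedly below.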
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-427637 -n embed-certs-427637
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-427637 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-427637 logs -n 25: (1.149181849s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ stop    │ -p NoKubernetes-502612                                                                                                                                                                                                                              │ NoKubernetes-502612          │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p NoKubernetes-502612 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-502612          │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ ssh     │ -p NoKubernetes-502612 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-502612          │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ delete  │ -p NoKubernetes-502612                                                                                                                                                                                                                              │ NoKubernetes-502612          │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-838815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ stop    │ -p old-k8s-version-838815 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-838815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p old-k8s-version-838815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:14 UTC │
	│ addons  │ enable metrics-server -p no-preload-182765 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ stop    │ -p no-preload-182765 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ image   │ old-k8s-version-838815 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ pause   │ -p old-k8s-version-838815 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ unpause │ -p old-k8s-version-838815 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ delete  │ -p old-k8s-version-838815                                                                                                                                                                                                                           │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ addons  │ enable dashboard -p no-preload-182765 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ start   │ -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p old-k8s-version-838815                                                                                                                                                                                                                           │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ start   │ -p embed-certs-427637 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-427637           │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:15 UTC │
	│ start   │ -p cert-expiration-004045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-004045       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p cert-expiration-004045                                                                                                                                                                                                                           │ cert-expiration-004045       │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p disable-driver-mounts-602172                                                                                                                                                                                                                     │ disable-driver-mounts-602172 │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ start   │ -p default-k8s-diff-port-983163 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-983163 │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-093930    │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-093930    │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:15:37
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:15:37.160310  287103 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:15:37.160589  287103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:15:37.160599  287103 out.go:374] Setting ErrFile to fd 2...
	I1124 03:15:37.160606  287103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:15:37.160898  287103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:15:37.161474  287103 out.go:368] Setting JSON to false
	I1124 03:15:37.163005  287103 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3480,"bootTime":1763950657,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:15:37.163060  287103 start.go:143] virtualization: kvm guest
	I1124 03:15:37.165623  287103 out.go:179] * [kubernetes-upgrade-093930] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:15:37.167612  287103 notify.go:221] Checking for updates...
	I1124 03:15:37.167737  287103 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:15:37.169109  287103 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:15:37.170650  287103 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:15:36.878566  280966 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:15:36.878588  280966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:15:36.878645  280966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-983163
	I1124 03:15:36.880914  280966 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-983163"
	I1124 03:15:36.880959  280966 host.go:66] Checking if "default-k8s-diff-port-983163" exists ...
	I1124 03:15:36.881541  280966 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-983163 --format={{.State.Status}}
	I1124 03:15:36.916990  280966 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:15:36.917015  280966 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:15:36.917078  280966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-983163
	I1124 03:15:36.921019  280966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33087 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/default-k8s-diff-port-983163/id_rsa Username:docker}
	I1124 03:15:36.948440  280966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33087 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/default-k8s-diff-port-983163/id_rsa Username:docker}
	I1124 03:15:36.977842  280966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:15:37.038147  280966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:15:37.058234  280966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:15:37.075171  280966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:15:37.172745  287103 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 03:15:37.174066  287103 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:15:37.175164  287103 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:15:37.177408  287103 config.go:182] Loaded profile config "kubernetes-upgrade-093930": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:15:37.178133  287103 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:15:37.216832  287103 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:15:37.216963  287103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:15:37.284379  287103 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 03:15:37.274460892 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:15:37.284498  287103 docker.go:319] overlay module found
	I1124 03:15:37.286386  287103 out.go:179] * Using the docker driver based on existing profile
	I1124 03:15:37.287589  287103 start.go:309] selected driver: docker
	I1124 03:15:37.287606  287103 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-093930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-093930 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:15:37.287718  287103 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:15:37.288575  287103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:15:37.354517  287103 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 03:15:37.343748004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:15:37.354835  287103 cni.go:84] Creating CNI manager for ""
	I1124 03:15:37.354986  287103 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:15:37.355090  287103 start.go:353] cluster config:
	{Name:kubernetes-upgrade-093930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-093930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:15:37.357500  287103 out.go:179] * Starting "kubernetes-upgrade-093930" primary control-plane node in "kubernetes-upgrade-093930" cluster
	I1124 03:15:37.358742  287103 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:15:37.360259  287103 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:15:37.361460  287103 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:15:37.361493  287103 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1124 03:15:37.361504  287103 cache.go:65] Caching tarball of preloaded images
	I1124 03:15:37.361566  287103 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:15:37.361625  287103 preload.go:238] Found /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 03:15:37.361639  287103 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 03:15:37.361814  287103 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/kubernetes-upgrade-093930/config.json ...
	I1124 03:15:37.387309  287103 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:15:37.387336  287103 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:15:37.387356  287103 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:15:37.387403  287103 start.go:360] acquireMachinesLock for kubernetes-upgrade-093930: {Name:mk48d2551c335008e28757aaafc77c2cf50948b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:15:37.387477  287103 start.go:364] duration metric: took 48.902µs to acquireMachinesLock for "kubernetes-upgrade-093930"
	I1124 03:15:37.387502  287103 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:15:37.387513  287103 fix.go:54] fixHost starting: 
	I1124 03:15:37.387800  287103 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-093930 --format={{.State.Status}}
	I1124 03:15:37.410161  287103 fix.go:112] recreateIfNeeded on kubernetes-upgrade-093930: state=Running err=<nil>
	W1124 03:15:37.410193  287103 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:15:37.180044  280966 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 03:15:37.181238  280966 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-983163" to be "Ready" ...
	I1124 03:15:37.415527  280966 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:15:37.413334  287103 out.go:252] * Updating the running docker "kubernetes-upgrade-093930" container ...
	I1124 03:15:37.413379  287103 machine.go:94] provisionDockerMachine start ...
	I1124 03:15:37.413457  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:37.439966  287103 main.go:143] libmachine: Using SSH client type: native
	I1124 03:15:37.440263  287103 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33022 <nil> <nil>}
	I1124 03:15:37.440278  287103 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:15:37.589544  287103 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-093930
	
	I1124 03:15:37.589580  287103 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-093930"
	I1124 03:15:37.589653  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:37.611390  287103 main.go:143] libmachine: Using SSH client type: native
	I1124 03:15:37.611615  287103 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33022 <nil> <nil>}
	I1124 03:15:37.611637  287103 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-093930 && echo "kubernetes-upgrade-093930" | sudo tee /etc/hostname
	I1124 03:15:37.768981  287103 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-093930
	
	I1124 03:15:37.769060  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:37.792648  287103 main.go:143] libmachine: Using SSH client type: native
	I1124 03:15:37.792983  287103 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33022 <nil> <nil>}
	I1124 03:15:37.793028  287103 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-093930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-093930/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-093930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:15:37.940061  287103 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:15:37.940102  287103 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-4883/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-4883/.minikube}
	I1124 03:15:37.940129  287103 ubuntu.go:190] setting up certificates
	I1124 03:15:37.940143  287103 provision.go:84] configureAuth start
	I1124 03:15:37.940204  287103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-093930
	I1124 03:15:37.960416  287103 provision.go:143] copyHostCerts
	I1124 03:15:37.960481  287103 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem, removing ...
	I1124 03:15:37.960497  287103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem
	I1124 03:15:37.960602  287103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem (1078 bytes)
	I1124 03:15:37.960748  287103 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem, removing ...
	I1124 03:15:37.960760  287103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem
	I1124 03:15:37.960839  287103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem (1123 bytes)
	I1124 03:15:37.960950  287103 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem, removing ...
	I1124 03:15:37.960963  287103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem
	I1124 03:15:37.961025  287103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem (1679 bytes)
	I1124 03:15:37.961124  287103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-093930 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-093930 localhost minikube]
	I1124 03:15:37.981489  287103 provision.go:177] copyRemoteCerts
	I1124 03:15:37.981537  287103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:15:37.981566  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:38.002360  287103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33022 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/kubernetes-upgrade-093930/id_rsa Username:docker}
	I1124 03:15:38.103204  287103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:15:38.121875  287103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:15:38.139841  287103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1124 03:15:38.162306  287103 provision.go:87] duration metric: took 222.150529ms to configureAuth
	I1124 03:15:38.162333  287103 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:15:38.162521  287103 config.go:182] Loaded profile config "kubernetes-upgrade-093930": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:15:38.162535  287103 machine.go:97] duration metric: took 749.149471ms to provisionDockerMachine
	I1124 03:15:38.162543  287103 start.go:293] postStartSetup for "kubernetes-upgrade-093930" (driver="docker")
	I1124 03:15:38.162552  287103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:15:38.162606  287103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:15:38.162644  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:38.195977  287103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33022 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/kubernetes-upgrade-093930/id_rsa Username:docker}
	I1124 03:15:38.299790  287103 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:15:38.303385  287103 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:15:38.303416  287103 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:15:38.303428  287103 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/addons for local assets ...
	I1124 03:15:38.303480  287103 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/files for local assets ...
	I1124 03:15:38.303552  287103 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem -> 84292.pem in /etc/ssl/certs
	I1124 03:15:38.303639  287103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:15:38.311506  287103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:15:38.329605  287103 start.go:296] duration metric: took 167.046667ms for postStartSetup
	I1124 03:15:38.329680  287103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:15:38.329727  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:38.350055  287103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33022 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/kubernetes-upgrade-093930/id_rsa Username:docker}
	I1124 03:15:38.447466  287103 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:15:38.452861  287103 fix.go:56] duration metric: took 1.065341914s for fixHost
	I1124 03:15:38.452889  287103 start.go:83] releasing machines lock for "kubernetes-upgrade-093930", held for 1.065397353s
	I1124 03:15:38.452955  287103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-093930
	I1124 03:15:38.471924  287103 ssh_runner.go:195] Run: cat /version.json
	I1124 03:15:38.471970  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:38.472025  287103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:15:38.472120  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:38.493069  287103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33022 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/kubernetes-upgrade-093930/id_rsa Username:docker}
	I1124 03:15:38.493568  287103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33022 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/kubernetes-upgrade-093930/id_rsa Username:docker}
	I1124 03:15:38.647014  287103 ssh_runner.go:195] Run: systemctl --version
	I1124 03:15:38.653867  287103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:15:38.659747  287103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:15:38.659844  287103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:15:38.668204  287103 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:15:38.668238  287103 start.go:496] detecting cgroup driver to use...
	I1124 03:15:38.668279  287103 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:15:38.668318  287103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:15:38.683442  287103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:15:38.697554  287103 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:15:38.697622  287103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:15:38.713677  287103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:15:38.726920  287103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:15:38.835190  287103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:15:38.942010  287103 docker.go:234] disabling docker service ...
	I1124 03:15:38.942063  287103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:15:38.957492  287103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:15:38.969978  287103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:15:39.073293  287103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:15:39.179565  287103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:15:39.193754  287103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:15:39.208498  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 03:15:39.217460  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:15:39.226374  287103 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 03:15:39.226436  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 03:15:39.235703  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:15:39.245586  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:15:39.255036  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:15:39.264469  287103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:15:39.272678  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:15:39.281717  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:15:39.290636  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:15:39.299633  287103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:15:39.306955  287103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:15:39.314452  287103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:15:39.421007  287103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 03:15:39.563483  287103 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:15:39.563563  287103 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:15:39.567752  287103 start.go:564] Will wait 60s for crictl version
	I1124 03:15:39.567824  287103 ssh_runner.go:195] Run: which crictl
	I1124 03:15:39.571455  287103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:15:39.597181  287103 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:15:39.597266  287103 ssh_runner.go:195] Run: containerd --version
	I1124 03:15:39.618448  287103 ssh_runner.go:195] Run: containerd --version
	I1124 03:15:39.642351  287103 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 03:15:39.643428  287103 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-093930 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:15:39.671599  287103 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:15:39.675857  287103 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-093930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-093930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:15:39.675966  287103 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:15:39.676022  287103 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:15:39.702658  287103 containerd.go:623] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I1124 03:15:39.702721  287103 ssh_runner.go:195] Run: which lz4
	I1124 03:15:39.706795  287103 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1124 03:15:39.710616  287103 ssh_runner.go:356] copy: skipping /preloaded.tar.lz4 (exists)
	I1124 03:15:39.710635  287103 containerd.go:563] duration metric: took 3.889628ms to copy over tarball
	I1124 03:15:39.710680  287103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1124 03:15:37.416743  280966 addons.go:530] duration metric: took 568.498016ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:15:37.684712  280966 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-983163" context rescaled to 1 replicas
	W1124 03:15:39.184513  280966 node_ready.go:57] node "default-k8s-diff-port-983163" has "Ready":"False" status (will retry)
	W1124 03:15:41.231214  280966 node_ready.go:57] node "default-k8s-diff-port-983163" has "Ready":"False" status (will retry)
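	
	The preload handling logged above (checking for lz4, stat-ing /preloaded.tar.lz4, then untarring into /var) can be replayed by hand inside the node when debugging image preloads. A minimal sketch using the same commands the log shows, assuming a shell on the node (e.g. via minikube ssh):
	
	  # confirm the lz4 binary and the staged preload tarball are present
	  which lz4
	  stat -c "%s %y" /preloaded.tar.lz4
	  # extract the preloaded images into /var, preserving file capabilities
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4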
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	63d1ed68d8fa7       56cc512116c8f       8 seconds ago       Running             busybox                   0                   f61d3541f9c63       busybox                                      default
	1c5ecefe3510d       52546a367cc9e       13 seconds ago      Running             coredns                   0                   2d9b76a873f45       coredns-66bc5c9577-lwlxk                     kube-system
	e56e76bbfa118       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   c7ac05e4a5431       storage-provisioner                          kube-system
	0c29b1f094f4a       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   07c55aca022d4       kindnet-nbw22                                kube-system
	6ee9232927bad       fc25172553d79       25 seconds ago      Running             kube-proxy                0                   79e01ed043e4a       kube-proxy-nr7h4                             kube-system
	7456a10c919e6       7dd6aaa1717ab       35 seconds ago      Running             kube-scheduler            0                   de2a8e142b3b3       kube-scheduler-embed-certs-427637            kube-system
	4f08f2d505c46       c80c8dbafe7dd       35 seconds ago      Running             kube-controller-manager   0                   2a2ae758c6c56       kube-controller-manager-embed-certs-427637   kube-system
	b86a90195fd1a       c3994bc696102       35 seconds ago      Running             kube-apiserver            0                   c89882ad428b4       kube-apiserver-embed-certs-427637            kube-system
	32fa11b4d353a       5f1f5298c888d       35 seconds ago      Running             etcd                      0                   375f59c4a10c7       etcd-embed-certs-427637                      kube-system
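	
	The container status table above is CRI output collected from the node. A minimal sketch of reproducing it for this profile, assuming crictl is installed on the node as the earlier "which crictl" step indicates:
	
	  minikube ssh -p embed-certs-427637 -- sudo crictl ps -a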
	
	
	==> containerd <==
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.652664959Z" level=info msg="connecting to shim e56e76bbfa118cc06d71064f22f4c4505d29a579e5d600dc5beac2698beb8dd5" address="unix:///run/containerd/s/82130fe4796e7dde376b20730988bc891386a66f7140ea018911bf5c8e0459ad" protocol=ttrpc version=3
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.679335862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lwlxk,Uid:089fe6b1-3d54-44a7-bb14-4d23c7b4b612,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d9b76a873f45989e40afe96cb15d94c3396ad8b3c4c72ac4ed1249d58501dd7\""
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.684415529Z" level=info msg="CreateContainer within sandbox \"2d9b76a873f45989e40afe96cb15d94c3396ad8b3c4c72ac4ed1249d58501dd7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.691674268Z" level=info msg="Container 1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.699492032Z" level=info msg="CreateContainer within sandbox \"2d9b76a873f45989e40afe96cb15d94c3396ad8b3c4c72ac4ed1249d58501dd7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465\""
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.700246169Z" level=info msg="StartContainer for \"1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465\""
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.701533281Z" level=info msg="connecting to shim 1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465" address="unix:///run/containerd/s/1d2d4c547768ada06fd9730869488402420420cde2dab3facfe7510ea12e4a2a" protocol=ttrpc version=3
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.714885308Z" level=info msg="StartContainer for \"e56e76bbfa118cc06d71064f22f4c4505d29a579e5d600dc5beac2698beb8dd5\" returns successfully"
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.767721301Z" level=info msg="StartContainer for \"1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465\" returns successfully"
	Nov 24 03:15:34 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:34.641736245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:218931ee-0865-4000-b423-6af3bc31f260,Namespace:default,Attempt:0,}"
	Nov 24 03:15:34 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:34.699758679Z" level=info msg="connecting to shim f61d3541f9c630709462edbcd6daa9d4bcc8dcb6d6d9ddb8e8e9f090c728af88" address="unix:///run/containerd/s/86965f3576961bc5c5eecd3d71c2ecb146c8349c84e7256e600fc7aca6b072c6" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:15:34 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:34.784737795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:218931ee-0865-4000-b423-6af3bc31f260,Namespace:default,Attempt:0,} returns sandbox id \"f61d3541f9c630709462edbcd6daa9d4bcc8dcb6d6d9ddb8e8e9f090c728af88\""
	Nov 24 03:15:34 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:34.788331416Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.845691174Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.846339608Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.847543074Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.850249865Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.850881683Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.062481771s"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.851285450Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.859905526Z" level=info msg="CreateContainer within sandbox \"f61d3541f9c630709462edbcd6daa9d4bcc8dcb6d6d9ddb8e8e9f090c728af88\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.870043202Z" level=info msg="Container 63d1ed68d8fa7da7a8b3f98ed171a56babc498e5e7487102b1a00e24d4c93972: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.882058512Z" level=info msg="CreateContainer within sandbox \"f61d3541f9c630709462edbcd6daa9d4bcc8dcb6d6d9ddb8e8e9f090c728af88\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"63d1ed68d8fa7da7a8b3f98ed171a56babc498e5e7487102b1a00e24d4c93972\""
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.883005733Z" level=info msg="StartContainer for \"63d1ed68d8fa7da7a8b3f98ed171a56babc498e5e7487102b1a00e24d4c93972\""
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.884110310Z" level=info msg="connecting to shim 63d1ed68d8fa7da7a8b3f98ed171a56babc498e5e7487102b1a00e24d4c93972" address="unix:///run/containerd/s/86965f3576961bc5c5eecd3d71c2ecb146c8349c84e7256e600fc7aca6b072c6" protocol=ttrpc version=3
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.971436524Z" level=info msg="StartContainer for \"63d1ed68d8fa7da7a8b3f98ed171a56babc498e5e7487102b1a00e24d4c93972\" returns successfully"
	
	
	==> coredns [1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50585 - 3925 "HINFO IN 536952529675684913.1936608164626354691. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.018924135s
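	
	The CoreDNS log above can also be pulled directly from the pod listed in the container status table; a minimal sketch, assuming the kubeconfig context created for this profile:
	
	  kubectl --context embed-certs-427637 -n kube-system logs coredns-66bc5c9577-lwlxk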
	
	
	==> describe nodes <==
	Name:               embed-certs-427637
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-427637
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=embed-certs-427637
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_15_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:15:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-427637
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:15:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:15:45 +0000   Mon, 24 Nov 2025 03:15:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:15:45 +0000   Mon, 24 Nov 2025 03:15:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:15:45 +0000   Mon, 24 Nov 2025 03:15:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:15:45 +0000   Mon, 24 Nov 2025 03:15:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-427637
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                839ea9e0-fcf6-4e2e-8442-8290dbe40da1
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-lwlxk                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-427637                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-nbw22                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-427637             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-427637    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-nr7h4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-427637             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node embed-certs-427637 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node embed-certs-427637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node embed-certs-427637 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node embed-certs-427637 event: Registered Node embed-certs-427637 in Controller
	  Normal  NodeReady                14s   kubelet          Node embed-certs-427637 status is now: NodeReady
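	
	The node description above is standard describe-node output for the single control-plane node; a minimal sketch of regenerating it, again assuming the profile's kubeconfig context:
	
	  kubectl --context embed-certs-427637 describe node embed-certs-427637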
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [32fa11b4d353ac18238716802bf8849023987e1942cfbc93ea1025ed998f28a1] <==
	{"level":"info","ts":"2025-11-24T03:15:12.374418Z","caller":"traceutil/trace.go:172","msg":"trace[433668974] transaction","detail":"{read_only:false; response_revision:20; number_of_response:1; }","duration":"246.863145ms","start":"2025-11-24T03:15:12.127549Z","end":"2025-11-24T03:15:12.374412Z","steps":["trace[433668974] 'process raft request'  (duration: 246.672771ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.374463Z","caller":"traceutil/trace.go:172","msg":"trace[137880259] linearizableReadLoop","detail":"{readStateIndex:17; appliedIndex:16; }","duration":"128.511549ms","start":"2025-11-24T03:15:12.245713Z","end":"2025-11-24T03:15:12.374225Z","steps":["trace[137880259] 'read index received'  (duration: 126.979615ms)","trace[137880259] 'applied index is now lower than readState.Index'  (duration: 1.528778ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:15:12.374526Z","caller":"traceutil/trace.go:172","msg":"trace[704840880] transaction","detail":"{read_only:false; response_revision:21; number_of_response:1; }","duration":"251.664331ms","start":"2025-11-24T03:15:12.122855Z","end":"2025-11-24T03:15:12.374519Z","steps":["trace[704840880] 'process raft request'  (duration: 251.389434ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:15:12.374526Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.127539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-427637\" limit:1 ","response":"range_response_count:1 size:3566"}
	{"level":"info","ts":"2025-11-24T03:15:12.376023Z","caller":"traceutil/trace.go:172","msg":"trace[201413550] range","detail":"{range_begin:/registry/minions/embed-certs-427637; range_end:; response_count:1; response_revision:23; }","duration":"253.626696ms","start":"2025-11-24T03:15:12.122385Z","end":"2025-11-24T03:15:12.376012Z","steps":["trace[201413550] 'agreement among raft nodes before linearized reading'  (duration: 252.105556ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.374544Z","caller":"traceutil/trace.go:172","msg":"trace[292989784] transaction","detail":"{read_only:false; response_revision:15; number_of_response:1; }","duration":"247.47147ms","start":"2025-11-24T03:15:12.127061Z","end":"2025-11-24T03:15:12.374532Z","steps":["trace[292989784] 'process raft request'  (duration: 247.018046ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.374554Z","caller":"traceutil/trace.go:172","msg":"trace[339810947] transaction","detail":"{read_only:false; response_revision:17; number_of_response:1; }","duration":"247.200915ms","start":"2025-11-24T03:15:12.127347Z","end":"2025-11-24T03:15:12.374548Z","steps":["trace[339810947] 'process raft request'  (duration: 246.803977ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:15:12.374580Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.927868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-11-24T03:15:12.376610Z","caller":"traceutil/trace.go:172","msg":"trace[87780294] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:23; }","duration":"254.95499ms","start":"2025-11-24T03:15:12.121640Z","end":"2025-11-24T03:15:12.376595Z","steps":["trace[87780294] 'agreement among raft nodes before linearized reading'  (duration: 252.854072ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.374606Z","caller":"traceutil/trace.go:172","msg":"trace[1909663951] transaction","detail":"{read_only:false; response_revision:22; number_of_response:1; }","duration":"243.618876ms","start":"2025-11-24T03:15:12.130981Z","end":"2025-11-24T03:15:12.374600Z","steps":["trace[1909663951] 'process raft request'  (duration: 243.306136ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.374609Z","caller":"traceutil/trace.go:172","msg":"trace[1041631161] transaction","detail":"{read_only:false; response_revision:16; number_of_response:1; }","duration":"247.516597ms","start":"2025-11-24T03:15:12.127084Z","end":"2025-11-24T03:15:12.374601Z","steps":["trace[1041631161] 'process raft request'  (duration: 247.035488ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.374632Z","caller":"traceutil/trace.go:172","msg":"trace[290592294] transaction","detail":"{read_only:false; response_revision:13; number_of_response:1; }","duration":"253.699ms","start":"2025-11-24T03:15:12.120926Z","end":"2025-11-24T03:15:12.374625Z","steps":["trace[290592294] 'process raft request'  (duration: 124.76671ms)","trace[290592294] 'compare'  (duration: 127.530838ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:15:12.506735Z","caller":"traceutil/trace.go:172","msg":"trace[1107527160] linearizableReadLoop","detail":"{readStateIndex:27; appliedIndex:27; }","duration":"124.406187ms","start":"2025-11-24T03:15:12.382307Z","end":"2025-11-24T03:15:12.506713Z","steps":["trace[1107527160] 'read index received'  (duration: 124.399059ms)","trace[1107527160] 'applied index is now lower than readState.Index'  (duration: 5.866µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T03:15:12.599172Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"216.83261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T03:15:12.599248Z","caller":"traceutil/trace.go:172","msg":"trace[1136797137] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:0; response_revision:23; }","duration":"216.927158ms","start":"2025-11-24T03:15:12.382303Z","end":"2025-11-24T03:15:12.599230Z","steps":["trace[1136797137] 'agreement among raft nodes before linearized reading'  (duration: 124.493407ms)","trace[1136797137] 'range keys from in-memory index tree'  (duration: 92.294868ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:15:12.599277Z","caller":"traceutil/trace.go:172","msg":"trace[1462202500] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"218.626214ms","start":"2025-11-24T03:15:12.380633Z","end":"2025-11-24T03:15:12.599259Z","steps":["trace[1462202500] 'process raft request'  (duration: 126.098823ms)","trace[1462202500] 'compare'  (duration: 92.396955ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:15:12.600013Z","caller":"traceutil/trace.go:172","msg":"trace[10143233] transaction","detail":"{read_only:false; response_revision:31; number_of_response:1; }","duration":"217.42856ms","start":"2025-11-24T03:15:12.382573Z","end":"2025-11-24T03:15:12.600001Z","steps":["trace[10143233] 'process raft request'  (duration: 217.404556ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.600036Z","caller":"traceutil/trace.go:172","msg":"trace[447826844] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"218.845898ms","start":"2025-11-24T03:15:12.381177Z","end":"2025-11-24T03:15:12.600023Z","steps":["trace[447826844] 'process raft request'  (duration: 218.58446ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.600062Z","caller":"traceutil/trace.go:172","msg":"trace[826399874] transaction","detail":"{read_only:false; response_revision:28; number_of_response:1; }","duration":"217.929279ms","start":"2025-11-24T03:15:12.382124Z","end":"2025-11-24T03:15:12.600053Z","steps":["trace[826399874] 'process raft request'  (duration: 217.783284ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.600124Z","caller":"traceutil/trace.go:172","msg":"trace[1073308548] transaction","detail":"{read_only:false; response_revision:26; number_of_response:1; }","duration":"218.140941ms","start":"2025-11-24T03:15:12.381977Z","end":"2025-11-24T03:15:12.600118Z","steps":["trace[1073308548] 'process raft request'  (duration: 217.866483ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.600157Z","caller":"traceutil/trace.go:172","msg":"trace[401622904] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"217.977395ms","start":"2025-11-24T03:15:12.382170Z","end":"2025-11-24T03:15:12.600147Z","steps":["trace[401622904] 'process raft request'  (duration: 217.781248ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.600244Z","caller":"traceutil/trace.go:172","msg":"trace[2106743223] transaction","detail":"{read_only:false; response_revision:27; number_of_response:1; }","duration":"218.195707ms","start":"2025-11-24T03:15:12.382040Z","end":"2025-11-24T03:15:12.600236Z","steps":["trace[2106743223] 'process raft request'  (duration: 217.83762ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.600269Z","caller":"traceutil/trace.go:172","msg":"trace[1245880939] transaction","detail":"{read_only:false; response_revision:29; number_of_response:1; }","duration":"218.092999ms","start":"2025-11-24T03:15:12.382168Z","end":"2025-11-24T03:15:12.600261Z","steps":["trace[1245880939] 'process raft request'  (duration: 217.761324ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.836951Z","caller":"traceutil/trace.go:172","msg":"trace[1143838732] transaction","detail":"{read_only:false; response_revision:42; number_of_response:1; }","duration":"169.883367ms","start":"2025-11-24T03:15:12.667035Z","end":"2025-11-24T03:15:12.836918Z","steps":["trace[1143838732] 'process raft request'  (duration: 106.652157ms)","trace[1143838732] 'compare'  (duration: 63.066447ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:15:41.945725Z","caller":"traceutil/trace.go:172","msg":"trace[1590824425] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"150.142414ms","start":"2025-11-24T03:15:41.795561Z","end":"2025-11-24T03:15:41.945703Z","steps":["trace[1590824425] 'process raft request'  (duration: 149.99967ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:15:45 up 58 min,  0 user,  load average: 3.37, 2.91, 2.04
	Linux embed-certs-427637 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0c29b1f094f4a1f822553da904f2d9fd85f07fe1685ade3f85d7a1ad29410529] <==
	I1124 03:15:20.951758       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:15:20.952085       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 03:15:20.952224       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:15:20.952248       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:15:20.952275       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:15:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:15:21.152447       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:15:21.152474       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:15:21.152482       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:15:21.236870       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:15:21.436956       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:15:21.437004       1 metrics.go:72] Registering metrics
	I1124 03:15:21.437080       1 controller.go:711] "Syncing nftables rules"
	I1124 03:15:31.152967       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:15:31.153063       1 main.go:301] handling current node
	I1124 03:15:41.154897       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:15:41.154966       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b86a90195fd1a09eb58b38f26ad5eff53b8fcae105d54dd47c874e892d0342ff] <==
	I1124 03:15:11.961591       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 03:15:11.961620       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1124 03:15:12.118705       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:12.120093       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1124 03:15:12.121874       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1124 03:15:12.122101       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:15:12.377727       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:15:12.378868       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:12.864412       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:15:12.878285       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:15:12.878318       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:15:13.431622       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:15:13.485491       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:15:13.570733       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:15:13.578896       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 03:15:13.580256       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:15:13.585400       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:15:13.890348       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:15:14.515801       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:15:14.526209       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:15:14.532951       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:15:19.698263       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:19.704967       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:19.794612       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:15:19.892726       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4f08f2d505c46cbd0949c947f86ce23acf6de44a1fbea7f5a8f41784e3d9cee7] <==
	I1124 03:15:18.889943       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 03:15:18.889983       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 03:15:18.890021       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:15:18.890178       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:15:18.890220       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 03:15:18.890499       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:15:18.890518       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:15:18.890572       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:15:18.890843       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:15:18.891635       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:15:18.891688       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:15:18.891733       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:15:18.892954       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:15:18.893554       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:15:18.894522       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 03:15:18.894577       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 03:15:18.894608       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 03:15:18.894617       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 03:15:18.894624       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 03:15:18.900984       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:15:18.900964       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:15:18.903516       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-427637" podCIDRs=["10.244.0.0/24"]
	I1124 03:15:18.906841       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:15:18.913201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:15:33.843717       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6ee9232927baded5b8c1850deba884ba097eb1113f0945bbee245ce7682d2b44] <==
	I1124 03:15:20.472987       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:15:20.560649       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:15:20.661246       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:15:20.661291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 03:15:20.661418       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:15:20.684552       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:15:20.684601       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:15:20.690600       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:15:20.691118       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:15:20.691148       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:15:20.693840       1 config.go:200] "Starting service config controller"
	I1124 03:15:20.693869       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:15:20.693882       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:15:20.693896       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:15:20.693903       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:15:20.693910       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:15:20.694027       1 config.go:309] "Starting node config controller"
	I1124 03:15:20.694040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:15:20.794049       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:15:20.794065       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:15:20.794090       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:15:20.794102       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [7456a10c919e6bc8e366bd8d2615b02ba388d90acda2ba06151b651e16735227] <==
	E1124 03:15:11.914822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:15:11.914872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:15:11.914887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:15:11.914880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:15:11.914956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:15:11.914958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:15:11.915012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:15:11.915039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:15:11.915060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:15:11.915078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:15:11.915505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:15:11.915647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:15:11.915660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:15:11.915820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:15:12.718271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:15:12.856969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:15:12.899338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:15:12.919152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:15:13.034944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:15:13.037098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:15:13.139901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:15:13.217993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:15:13.241136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:15:13.286901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 03:15:16.211274       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:15:15 embed-certs-427637 kubelet[1445]: I1124 03:15:15.414987    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-427637" podStartSLOduration=1.414962278 podStartE2EDuration="1.414962278s" podCreationTimestamp="2025-11-24 03:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:15.400119477 +0000 UTC m=+1.131469166" watchObservedRunningTime="2025-11-24 03:15:15.414962278 +0000 UTC m=+1.146311966"
	Nov 24 03:15:15 embed-certs-427637 kubelet[1445]: I1124 03:15:15.426287    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-427637" podStartSLOduration=1.4262538949999999 podStartE2EDuration="1.426253895s" podCreationTimestamp="2025-11-24 03:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:15.414952146 +0000 UTC m=+1.146301834" watchObservedRunningTime="2025-11-24 03:15:15.426253895 +0000 UTC m=+1.157603582"
	Nov 24 03:15:15 embed-certs-427637 kubelet[1445]: I1124 03:15:15.439455    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-427637" podStartSLOduration=1.43943516 podStartE2EDuration="1.43943516s" podCreationTimestamp="2025-11-24 03:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:15.42655058 +0000 UTC m=+1.157900271" watchObservedRunningTime="2025-11-24 03:15:15.43943516 +0000 UTC m=+1.170784852"
	Nov 24 03:15:15 embed-certs-427637 kubelet[1445]: I1124 03:15:15.462563    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-427637" podStartSLOduration=1.462537597 podStartE2EDuration="1.462537597s" podCreationTimestamp="2025-11-24 03:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:15.439616935 +0000 UTC m=+1.170966619" watchObservedRunningTime="2025-11-24 03:15:15.462537597 +0000 UTC m=+1.193887276"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.002595    1445 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.003412    1445 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981240    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnk75\" (UniqueName: \"kubernetes.io/projected/672c7a34-edc7-4839-9de2-2321574fadc7-kube-api-access-mnk75\") pod \"kindnet-nbw22\" (UID: \"672c7a34-edc7-4839-9de2-2321574fadc7\") " pod="kube-system/kindnet-nbw22"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981302    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/672c7a34-edc7-4839-9de2-2321574fadc7-xtables-lock\") pod \"kindnet-nbw22\" (UID: \"672c7a34-edc7-4839-9de2-2321574fadc7\") " pod="kube-system/kindnet-nbw22"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981331    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/672c7a34-edc7-4839-9de2-2321574fadc7-lib-modules\") pod \"kindnet-nbw22\" (UID: \"672c7a34-edc7-4839-9de2-2321574fadc7\") " pod="kube-system/kindnet-nbw22"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981354    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c973bc1-1945-4ef2-af14-44451934d69b-kube-proxy\") pod \"kube-proxy-nr7h4\" (UID: \"3c973bc1-1945-4ef2-af14-44451934d69b\") " pod="kube-system/kube-proxy-nr7h4"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981379    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzlqt\" (UniqueName: \"kubernetes.io/projected/3c973bc1-1945-4ef2-af14-44451934d69b-kube-api-access-fzlqt\") pod \"kube-proxy-nr7h4\" (UID: \"3c973bc1-1945-4ef2-af14-44451934d69b\") " pod="kube-system/kube-proxy-nr7h4"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981421    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/672c7a34-edc7-4839-9de2-2321574fadc7-cni-cfg\") pod \"kindnet-nbw22\" (UID: \"672c7a34-edc7-4839-9de2-2321574fadc7\") " pod="kube-system/kindnet-nbw22"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981443    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c973bc1-1945-4ef2-af14-44451934d69b-xtables-lock\") pod \"kube-proxy-nr7h4\" (UID: \"3c973bc1-1945-4ef2-af14-44451934d69b\") " pod="kube-system/kube-proxy-nr7h4"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981463    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c973bc1-1945-4ef2-af14-44451934d69b-lib-modules\") pod \"kube-proxy-nr7h4\" (UID: \"3c973bc1-1945-4ef2-af14-44451934d69b\") " pod="kube-system/kube-proxy-nr7h4"
	Nov 24 03:15:21 embed-certs-427637 kubelet[1445]: I1124 03:15:21.410463    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nbw22" podStartSLOduration=2.410441093 podStartE2EDuration="2.410441093s" podCreationTimestamp="2025-11-24 03:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:21.410347659 +0000 UTC m=+7.141697348" watchObservedRunningTime="2025-11-24 03:15:21.410441093 +0000 UTC m=+7.141790780"
	Nov 24 03:15:21 embed-certs-427637 kubelet[1445]: I1124 03:15:21.410589    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nr7h4" podStartSLOduration=2.410580994 podStartE2EDuration="2.410580994s" podCreationTimestamp="2025-11-24 03:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:21.400928525 +0000 UTC m=+7.132278213" watchObservedRunningTime="2025-11-24 03:15:21.410580994 +0000 UTC m=+7.141930682"
	Nov 24 03:15:31 embed-certs-427637 kubelet[1445]: I1124 03:15:31.181192    1445 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:15:31 embed-certs-427637 kubelet[1445]: I1124 03:15:31.259865    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9vxb\" (UniqueName: \"kubernetes.io/projected/f852078e-d93a-4451-87b5-dc786099fe74-kube-api-access-b9vxb\") pod \"storage-provisioner\" (UID: \"f852078e-d93a-4451-87b5-dc786099fe74\") " pod="kube-system/storage-provisioner"
	Nov 24 03:15:31 embed-certs-427637 kubelet[1445]: I1124 03:15:31.260082    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/089fe6b1-3d54-44a7-bb14-4d23c7b4b612-config-volume\") pod \"coredns-66bc5c9577-lwlxk\" (UID: \"089fe6b1-3d54-44a7-bb14-4d23c7b4b612\") " pod="kube-system/coredns-66bc5c9577-lwlxk"
	Nov 24 03:15:31 embed-certs-427637 kubelet[1445]: I1124 03:15:31.260118    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7p5v\" (UniqueName: \"kubernetes.io/projected/089fe6b1-3d54-44a7-bb14-4d23c7b4b612-kube-api-access-z7p5v\") pod \"coredns-66bc5c9577-lwlxk\" (UID: \"089fe6b1-3d54-44a7-bb14-4d23c7b4b612\") " pod="kube-system/coredns-66bc5c9577-lwlxk"
	Nov 24 03:15:31 embed-certs-427637 kubelet[1445]: I1124 03:15:31.260150    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f852078e-d93a-4451-87b5-dc786099fe74-tmp\") pod \"storage-provisioner\" (UID: \"f852078e-d93a-4451-87b5-dc786099fe74\") " pod="kube-system/storage-provisioner"
	Nov 24 03:15:32 embed-certs-427637 kubelet[1445]: I1124 03:15:32.430345    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lwlxk" podStartSLOduration=12.430322141 podStartE2EDuration="12.430322141s" podCreationTimestamp="2025-11-24 03:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:32.429866328 +0000 UTC m=+18.161216018" watchObservedRunningTime="2025-11-24 03:15:32.430322141 +0000 UTC m=+18.161671833"
	Nov 24 03:15:32 embed-certs-427637 kubelet[1445]: I1124 03:15:32.440346    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.440324751 podStartE2EDuration="12.440324751s" podCreationTimestamp="2025-11-24 03:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:32.439720621 +0000 UTC m=+18.171070309" watchObservedRunningTime="2025-11-24 03:15:32.440324751 +0000 UTC m=+18.171674448"
	Nov 24 03:15:34 embed-certs-427637 kubelet[1445]: I1124 03:15:34.380926    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rbhm\" (UniqueName: \"kubernetes.io/projected/218931ee-0865-4000-b423-6af3bc31f260-kube-api-access-9rbhm\") pod \"busybox\" (UID: \"218931ee-0865-4000-b423-6af3bc31f260\") " pod="default/busybox"
	Nov 24 03:15:37 embed-certs-427637 kubelet[1445]: I1124 03:15:37.447377    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.380759908 podStartE2EDuration="3.447322965s" podCreationTimestamp="2025-11-24 03:15:34 +0000 UTC" firstStartedPulling="2025-11-24 03:15:34.787334469 +0000 UTC m=+20.518684141" lastFinishedPulling="2025-11-24 03:15:36.853897526 +0000 UTC m=+22.585247198" observedRunningTime="2025-11-24 03:15:37.446723318 +0000 UTC m=+23.178073005" watchObservedRunningTime="2025-11-24 03:15:37.447322965 +0000 UTC m=+23.178672653"
	
	
	==> storage-provisioner [e56e76bbfa118cc06d71064f22f4c4505d29a579e5d600dc5beac2698beb8dd5] <==
	I1124 03:15:31.723198       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:15:31.734356       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:15:31.734408       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:15:31.737068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:31.742578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:15:31.742756       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:15:31.742939       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9b2286f0-97af-4b9f-b226-4a9cb4f54a69", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-427637_e8d0e401-4ed3-4e75-81f1-7bee87c7c4b9 became leader
	I1124 03:15:31.743355       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-427637_e8d0e401-4ed3-4e75-81f1-7bee87c7c4b9!
	W1124 03:15:31.746552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:31.755213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:15:31.843557       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-427637_e8d0e401-4ed3-4e75-81f1-7bee87c7c4b9!
	W1124 03:15:33.758868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:33.763150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:35.766984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:35.771300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:37.776243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:37.780569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:39.784574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:39.788988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:41.792878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:41.946851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:43.950864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:43.955355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:45.959380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:45.964477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
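The storage-provisioner entries above show leader election going through a legacy v1 Endpoints lock (kube-system/k8s.io-minikube-hostpath), which is also what triggers the repeated EndpointSlice deprecation warnings. A minimal sketch for cross-checking the current holder from the same kubectl context the harness uses; the annotation name (control-plane.alpha.kubernetes.io/leader) is the one client-go's Endpoints lock normally writes and is an assumption here, not something printed in this log:

    kubectl --context embed-certs-427637 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The holderIdentity recorded in that annotation should match the identity in the "became leader" event above (embed-certs-427637_e8d0e401-4ed3-4e75-81f1-7bee87c7c4b9).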
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-427637 -n embed-certs-427637
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-427637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-427637
helpers_test.go:243: (dbg) docker inspect embed-certs-427637:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4",
	        "Created": "2025-11-24T03:14:56.013029284Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 276405,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:14:56.063628489Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4/hosts",
	        "LogPath": "/var/lib/docker/containers/1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4/1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4-json.log",
	        "Name": "/embed-certs-427637",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-427637:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-427637",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1966880807add64d9626a5fc8369042b8d149a9a4bcda57d380ce24f04c3c0c4",
	                "LowerDir": "/var/lib/docker/overlay2/50396b832abdd5e1ae4a1e8d43d84640d1e73103b450beb3bff6c75ff8be3d1e-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/50396b832abdd5e1ae4a1e8d43d84640d1e73103b450beb3bff6c75ff8be3d1e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/50396b832abdd5e1ae4a1e8d43d84640d1e73103b450beb3bff6c75ff8be3d1e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/50396b832abdd5e1ae4a1e8d43d84640d1e73103b450beb3bff6c75ff8be3d1e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-427637",
	                "Source": "/var/lib/docker/volumes/embed-certs-427637/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-427637",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-427637",
	                "name.minikube.sigs.k8s.io": "embed-certs-427637",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b6412f4b84f77c8156799fd10fa9507d23048da69d6fe3d69bc676ef6eaaf458",
	            "SandboxKey": "/var/run/docker/netns/b6412f4b84f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-427637": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da14df84615929cdce81de230728b5b4ded52dafce00fc44a291a9d383f39244",
	                    "EndpointID": "32a051c6ae17ac1398272ee18bae359ff004687b6c630ca2b11d2f89e64121c8",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "52:6a:ee:74:71:df",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-427637",
	                        "1966880807ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
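The NetworkSettings.Ports map in the inspect output above is where the host-side ports for SSH (22/tcp) and the API server (8443/tcp) live; the minikube logs further down use exactly this kind of Go template to look them up. A minimal sketch against the same container, assuming the docker CLI on the host:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-427637
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-427637

For the state captured here these print 33082 and 33085, matching the Ports section of the JSON.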
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-427637 -n embed-certs-427637
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-427637 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-427637 logs -n 25: (1.243261095s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p NoKubernetes-502612 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-502612          │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ delete  │ -p NoKubernetes-502612                                                                                                                                                                                                                              │ NoKubernetes-502612          │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-838815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ stop    │ -p old-k8s-version-838815 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-838815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p old-k8s-version-838815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:14 UTC │
	│ addons  │ enable metrics-server -p no-preload-182765 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ stop    │ -p no-preload-182765 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ image   │ old-k8s-version-838815 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ pause   │ -p old-k8s-version-838815 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ unpause │ -p old-k8s-version-838815 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ delete  │ -p old-k8s-version-838815                                                                                                                                                                                                                           │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ addons  │ enable dashboard -p no-preload-182765 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ start   │ -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p old-k8s-version-838815                                                                                                                                                                                                                           │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ start   │ -p embed-certs-427637 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-427637           │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:15 UTC │
	│ start   │ -p cert-expiration-004045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-004045       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p cert-expiration-004045                                                                                                                                                                                                                           │ cert-expiration-004045       │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p disable-driver-mounts-602172                                                                                                                                                                                                                     │ disable-driver-mounts-602172 │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ start   │ -p default-k8s-diff-port-983163 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-983163 │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-093930    │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-093930    │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	│ image   │ no-preload-182765 image list --format=json                                                                                                                                                                                                          │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ pause   │ -p no-preload-182765 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:15:37
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:15:37.160310  287103 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:15:37.160589  287103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:15:37.160599  287103 out.go:374] Setting ErrFile to fd 2...
	I1124 03:15:37.160606  287103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:15:37.160898  287103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:15:37.161474  287103 out.go:368] Setting JSON to false
	I1124 03:15:37.163005  287103 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3480,"bootTime":1763950657,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:15:37.163060  287103 start.go:143] virtualization: kvm guest
	I1124 03:15:37.165623  287103 out.go:179] * [kubernetes-upgrade-093930] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:15:37.167612  287103 notify.go:221] Checking for updates...
	I1124 03:15:37.167737  287103 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:15:37.169109  287103 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:15:37.170650  287103 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:15:36.878566  280966 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:15:36.878588  280966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:15:36.878645  280966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-983163
	I1124 03:15:36.880914  280966 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-983163"
	I1124 03:15:36.880959  280966 host.go:66] Checking if "default-k8s-diff-port-983163" exists ...
	I1124 03:15:36.881541  280966 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-983163 --format={{.State.Status}}
	I1124 03:15:36.916990  280966 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:15:36.917015  280966 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:15:36.917078  280966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-983163
	I1124 03:15:36.921019  280966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33087 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/default-k8s-diff-port-983163/id_rsa Username:docker}
	I1124 03:15:36.948440  280966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33087 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/default-k8s-diff-port-983163/id_rsa Username:docker}
	I1124 03:15:36.977842  280966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:15:37.038147  280966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:15:37.058234  280966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:15:37.075171  280966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:15:37.172745  287103 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 03:15:37.174066  287103 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:15:37.175164  287103 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:15:37.177408  287103 config.go:182] Loaded profile config "kubernetes-upgrade-093930": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:15:37.178133  287103 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:15:37.216832  287103 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:15:37.216963  287103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:15:37.284379  287103 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 03:15:37.274460892 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:15:37.284498  287103 docker.go:319] overlay module found
	I1124 03:15:37.286386  287103 out.go:179] * Using the docker driver based on existing profile
	I1124 03:15:37.287589  287103 start.go:309] selected driver: docker
	I1124 03:15:37.287606  287103 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-093930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-093930 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:15:37.287718  287103 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:15:37.288575  287103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:15:37.354517  287103 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 03:15:37.343748004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:15:37.354835  287103 cni.go:84] Creating CNI manager for ""
	I1124 03:15:37.354986  287103 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:15:37.355090  287103 start.go:353] cluster config:
	{Name:kubernetes-upgrade-093930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-093930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:15:37.357500  287103 out.go:179] * Starting "kubernetes-upgrade-093930" primary control-plane node in "kubernetes-upgrade-093930" cluster
	I1124 03:15:37.358742  287103 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:15:37.360259  287103 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:15:37.361460  287103 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:15:37.361493  287103 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1124 03:15:37.361504  287103 cache.go:65] Caching tarball of preloaded images
	I1124 03:15:37.361566  287103 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:15:37.361625  287103 preload.go:238] Found /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 03:15:37.361639  287103 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 03:15:37.361814  287103 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/kubernetes-upgrade-093930/config.json ...
	I1124 03:15:37.387309  287103 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:15:37.387336  287103 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:15:37.387356  287103 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:15:37.387403  287103 start.go:360] acquireMachinesLock for kubernetes-upgrade-093930: {Name:mk48d2551c335008e28757aaafc77c2cf50948b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:15:37.387477  287103 start.go:364] duration metric: took 48.902µs to acquireMachinesLock for "kubernetes-upgrade-093930"
	I1124 03:15:37.387502  287103 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:15:37.387513  287103 fix.go:54] fixHost starting: 
	I1124 03:15:37.387800  287103 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-093930 --format={{.State.Status}}
	I1124 03:15:37.410161  287103 fix.go:112] recreateIfNeeded on kubernetes-upgrade-093930: state=Running err=<nil>
	W1124 03:15:37.410193  287103 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:15:37.180044  280966 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 03:15:37.181238  280966 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-983163" to be "Ready" ...
	I1124 03:15:37.415527  280966 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:15:37.413334  287103 out.go:252] * Updating the running docker "kubernetes-upgrade-093930" container ...
	I1124 03:15:37.413379  287103 machine.go:94] provisionDockerMachine start ...
	I1124 03:15:37.413457  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:37.439966  287103 main.go:143] libmachine: Using SSH client type: native
	I1124 03:15:37.440263  287103 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33022 <nil> <nil>}
	I1124 03:15:37.440278  287103 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:15:37.589544  287103 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-093930
	
	I1124 03:15:37.589580  287103 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-093930"
	I1124 03:15:37.589653  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:37.611390  287103 main.go:143] libmachine: Using SSH client type: native
	I1124 03:15:37.611615  287103 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33022 <nil> <nil>}
	I1124 03:15:37.611637  287103 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-093930 && echo "kubernetes-upgrade-093930" | sudo tee /etc/hostname
	I1124 03:15:37.768981  287103 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-093930
	
	I1124 03:15:37.769060  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:37.792648  287103 main.go:143] libmachine: Using SSH client type: native
	I1124 03:15:37.792983  287103 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33022 <nil> <nil>}
	I1124 03:15:37.793028  287103 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-093930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-093930/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-093930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:15:37.940061  287103 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:15:37.940102  287103 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-4883/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-4883/.minikube}
	I1124 03:15:37.940129  287103 ubuntu.go:190] setting up certificates
	I1124 03:15:37.940143  287103 provision.go:84] configureAuth start
	I1124 03:15:37.940204  287103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-093930
	I1124 03:15:37.960416  287103 provision.go:143] copyHostCerts
	I1124 03:15:37.960481  287103 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem, removing ...
	I1124 03:15:37.960497  287103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem
	I1124 03:15:37.960602  287103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem (1078 bytes)
	I1124 03:15:37.960748  287103 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem, removing ...
	I1124 03:15:37.960760  287103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem
	I1124 03:15:37.960839  287103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem (1123 bytes)
	I1124 03:15:37.960950  287103 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem, removing ...
	I1124 03:15:37.960963  287103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem
	I1124 03:15:37.961025  287103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem (1679 bytes)
	I1124 03:15:37.961124  287103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-093930 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-093930 localhost minikube]
	I1124 03:15:37.981489  287103 provision.go:177] copyRemoteCerts
	I1124 03:15:37.981537  287103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:15:37.981566  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:38.002360  287103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33022 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/kubernetes-upgrade-093930/id_rsa Username:docker}
	I1124 03:15:38.103204  287103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:15:38.121875  287103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:15:38.139841  287103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1124 03:15:38.162306  287103 provision.go:87] duration metric: took 222.150529ms to configureAuth
	I1124 03:15:38.162333  287103 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:15:38.162521  287103 config.go:182] Loaded profile config "kubernetes-upgrade-093930": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:15:38.162535  287103 machine.go:97] duration metric: took 749.149471ms to provisionDockerMachine
	I1124 03:15:38.162543  287103 start.go:293] postStartSetup for "kubernetes-upgrade-093930" (driver="docker")
	I1124 03:15:38.162552  287103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:15:38.162606  287103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:15:38.162644  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:38.195977  287103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33022 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/kubernetes-upgrade-093930/id_rsa Username:docker}
	I1124 03:15:38.299790  287103 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:15:38.303385  287103 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:15:38.303416  287103 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:15:38.303428  287103 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/addons for local assets ...
	I1124 03:15:38.303480  287103 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/files for local assets ...
	I1124 03:15:38.303552  287103 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem -> 84292.pem in /etc/ssl/certs
	I1124 03:15:38.303639  287103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:15:38.311506  287103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:15:38.329605  287103 start.go:296] duration metric: took 167.046667ms for postStartSetup
	I1124 03:15:38.329680  287103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:15:38.329727  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:38.350055  287103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33022 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/kubernetes-upgrade-093930/id_rsa Username:docker}
	I1124 03:15:38.447466  287103 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:15:38.452861  287103 fix.go:56] duration metric: took 1.065341914s for fixHost
	I1124 03:15:38.452889  287103 start.go:83] releasing machines lock for "kubernetes-upgrade-093930", held for 1.065397353s
	I1124 03:15:38.452955  287103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-093930
	I1124 03:15:38.471924  287103 ssh_runner.go:195] Run: cat /version.json
	I1124 03:15:38.471970  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:38.472025  287103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:15:38.472120  287103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-093930
	I1124 03:15:38.493069  287103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33022 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/kubernetes-upgrade-093930/id_rsa Username:docker}
	I1124 03:15:38.493568  287103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33022 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/kubernetes-upgrade-093930/id_rsa Username:docker}
	I1124 03:15:38.647014  287103 ssh_runner.go:195] Run: systemctl --version
	I1124 03:15:38.653867  287103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:15:38.659747  287103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:15:38.659844  287103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:15:38.668204  287103 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:15:38.668238  287103 start.go:496] detecting cgroup driver to use...
	I1124 03:15:38.668279  287103 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:15:38.668318  287103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:15:38.683442  287103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:15:38.697554  287103 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:15:38.697622  287103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:15:38.713677  287103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:15:38.726920  287103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:15:38.835190  287103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:15:38.942010  287103 docker.go:234] disabling docker service ...
	I1124 03:15:38.942063  287103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:15:38.957492  287103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:15:38.969978  287103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:15:39.073293  287103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:15:39.179565  287103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
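The systemctl calls above are how minikube hands the node over from Docker/cri-dockerd to containerd: both the shim and the engine are stopped, disabled, and masked before containerd itself is reconfigured. A condensed sketch of that sequence, using only unit names that appear in the log (the final check is a simplified stand-in for the is-active probe above):

    # Take cri-dockerd out of the picture so kubelet cannot use it as a CRI endpoint.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service

    # Stop and mask the Docker engine itself; containerd stays as the only runtime.
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"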
	I1124 03:15:39.193754  287103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:15:39.208498  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 03:15:39.217460  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:15:39.226374  287103 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 03:15:39.226436  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 03:15:39.235703  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:15:39.245586  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:15:39.255036  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:15:39.264469  287103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:15:39.272678  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:15:39.281717  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:15:39.290636  287103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
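The run of sed commands above edits /etc/containerd/config.toml in place rather than templating a fresh file. A hedged summary of the net effect, followed by the restart that applies it a few lines later; the key names come straight from the sed patterns, but the exact TOML section paths depend on the containerd config version, so treat them as illustrative:

    # Net effect of the in-place edits on /etc/containerd/config.toml (illustrative key names):
    #   sandbox_image             = "registry.k8s.io/pause:3.10.1"
    #   restrict_oom_score_adj    = false
    #   SystemdCgroup             = true        # matches the "systemd" cgroup driver detected on the host
    #   runtime references        -> "io.containerd.runc.v2"
    #   conf_dir                  = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true
    sudo systemctl daemon-reload
    sudo systemctl restart containerd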
	I1124 03:15:39.299633  287103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:15:39.306955  287103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:15:39.314452  287103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:15:39.421007  287103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 03:15:39.563483  287103 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:15:39.563563  287103 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:15:39.567752  287103 start.go:564] Will wait 60s for crictl version
	I1124 03:15:39.567824  287103 ssh_runner.go:195] Run: which crictl
	I1124 03:15:39.571455  287103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:15:39.597181  287103 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:15:39.597266  287103 ssh_runner.go:195] Run: containerd --version
	I1124 03:15:39.618448  287103 ssh_runner.go:195] Run: containerd --version
	I1124 03:15:39.642351  287103 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 03:15:39.643428  287103 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-093930 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:15:39.671599  287103 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:15:39.675857  287103 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-093930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-093930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:15:39.675966  287103 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:15:39.676022  287103 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:15:39.702658  287103 containerd.go:623] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I1124 03:15:39.702721  287103 ssh_runner.go:195] Run: which lz4
	I1124 03:15:39.706795  287103 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1124 03:15:39.710616  287103 ssh_runner.go:356] copy: skipping /preloaded.tar.lz4 (exists)
	I1124 03:15:39.710635  287103 containerd.go:563] duration metric: took 3.889628ms to copy over tarball
	I1124 03:15:39.710680  287103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
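The preload step first checks whether /preloaded.tar.lz4 is already on the node (it is, so the copy is skipped) and then unpacks it into /var. A minimal standalone sketch of that check-and-extract flow, using the same paths and tar flags as the log:

    # Reuse the preload tarball if it is already present on the machine.
    if stat -c "%s %y" /preloaded.tar.lz4 >/dev/null 2>&1; then
      echo "preload tarball already on disk, skipping copy"
    fi
    # Unpack cached images and binaries into /var, keeping security xattrs,
    # decompressing with lz4 (same invocation as the ssh_runner call above).
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4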
	I1124 03:15:37.416743  280966 addons.go:530] duration metric: took 568.498016ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:15:37.684712  280966 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-983163" context rescaled to 1 replicas
	W1124 03:15:39.184513  280966 node_ready.go:57] node "default-k8s-diff-port-983163" has "Ready":"False" status (will retry)
	W1124 03:15:41.231214  280966 node_ready.go:57] node "default-k8s-diff-port-983163" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	63d1ed68d8fa7       56cc512116c8f       10 seconds ago      Running             busybox                   0                   f61d3541f9c63       busybox                                      default
	1c5ecefe3510d       52546a367cc9e       15 seconds ago      Running             coredns                   0                   2d9b76a873f45       coredns-66bc5c9577-lwlxk                     kube-system
	e56e76bbfa118       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   c7ac05e4a5431       storage-provisioner                          kube-system
	0c29b1f094f4a       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   07c55aca022d4       kindnet-nbw22                                kube-system
	6ee9232927bad       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   79e01ed043e4a       kube-proxy-nr7h4                             kube-system
	7456a10c919e6       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   de2a8e142b3b3       kube-scheduler-embed-certs-427637            kube-system
	4f08f2d505c46       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   2a2ae758c6c56       kube-controller-manager-embed-certs-427637   kube-system
	b86a90195fd1a       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   c89882ad428b4       kube-apiserver-embed-certs-427637            kube-system
	32fa11b4d353a       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   375f59c4a10c7       etcd-embed-certs-427637                      kube-system
	
	
	==> containerd <==
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.652664959Z" level=info msg="connecting to shim e56e76bbfa118cc06d71064f22f4c4505d29a579e5d600dc5beac2698beb8dd5" address="unix:///run/containerd/s/82130fe4796e7dde376b20730988bc891386a66f7140ea018911bf5c8e0459ad" protocol=ttrpc version=3
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.679335862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lwlxk,Uid:089fe6b1-3d54-44a7-bb14-4d23c7b4b612,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d9b76a873f45989e40afe96cb15d94c3396ad8b3c4c72ac4ed1249d58501dd7\""
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.684415529Z" level=info msg="CreateContainer within sandbox \"2d9b76a873f45989e40afe96cb15d94c3396ad8b3c4c72ac4ed1249d58501dd7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.691674268Z" level=info msg="Container 1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.699492032Z" level=info msg="CreateContainer within sandbox \"2d9b76a873f45989e40afe96cb15d94c3396ad8b3c4c72ac4ed1249d58501dd7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465\""
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.700246169Z" level=info msg="StartContainer for \"1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465\""
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.701533281Z" level=info msg="connecting to shim 1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465" address="unix:///run/containerd/s/1d2d4c547768ada06fd9730869488402420420cde2dab3facfe7510ea12e4a2a" protocol=ttrpc version=3
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.714885308Z" level=info msg="StartContainer for \"e56e76bbfa118cc06d71064f22f4c4505d29a579e5d600dc5beac2698beb8dd5\" returns successfully"
	Nov 24 03:15:31 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:31.767721301Z" level=info msg="StartContainer for \"1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465\" returns successfully"
	Nov 24 03:15:34 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:34.641736245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:218931ee-0865-4000-b423-6af3bc31f260,Namespace:default,Attempt:0,}"
	Nov 24 03:15:34 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:34.699758679Z" level=info msg="connecting to shim f61d3541f9c630709462edbcd6daa9d4bcc8dcb6d6d9ddb8e8e9f090c728af88" address="unix:///run/containerd/s/86965f3576961bc5c5eecd3d71c2ecb146c8349c84e7256e600fc7aca6b072c6" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:15:34 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:34.784737795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:218931ee-0865-4000-b423-6af3bc31f260,Namespace:default,Attempt:0,} returns sandbox id \"f61d3541f9c630709462edbcd6daa9d4bcc8dcb6d6d9ddb8e8e9f090c728af88\""
	Nov 24 03:15:34 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:34.788331416Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.845691174Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.846339608Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.847543074Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.850249865Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.850881683Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.062481771s"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.851285450Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.859905526Z" level=info msg="CreateContainer within sandbox \"f61d3541f9c630709462edbcd6daa9d4bcc8dcb6d6d9ddb8e8e9f090c728af88\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.870043202Z" level=info msg="Container 63d1ed68d8fa7da7a8b3f98ed171a56babc498e5e7487102b1a00e24d4c93972: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.882058512Z" level=info msg="CreateContainer within sandbox \"f61d3541f9c630709462edbcd6daa9d4bcc8dcb6d6d9ddb8e8e9f090c728af88\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"63d1ed68d8fa7da7a8b3f98ed171a56babc498e5e7487102b1a00e24d4c93972\""
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.883005733Z" level=info msg="StartContainer for \"63d1ed68d8fa7da7a8b3f98ed171a56babc498e5e7487102b1a00e24d4c93972\""
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.884110310Z" level=info msg="connecting to shim 63d1ed68d8fa7da7a8b3f98ed171a56babc498e5e7487102b1a00e24d4c93972" address="unix:///run/containerd/s/86965f3576961bc5c5eecd3d71c2ecb146c8349c84e7256e600fc7aca6b072c6" protocol=ttrpc version=3
	Nov 24 03:15:36 embed-certs-427637 containerd[657]: time="2025-11-24T03:15:36.971436524Z" level=info msg="StartContainer for \"63d1ed68d8fa7da7a8b3f98ed171a56babc498e5e7487102b1a00e24d4c93972\" returns successfully"
	
	
	==> coredns [1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50585 - 3925 "HINFO IN 536952529675684913.1936608164626354691. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.018924135s
	
	
	==> describe nodes <==
	Name:               embed-certs-427637
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-427637
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=embed-certs-427637
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_15_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:15:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-427637
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:15:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:15:45 +0000   Mon, 24 Nov 2025 03:15:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:15:45 +0000   Mon, 24 Nov 2025 03:15:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:15:45 +0000   Mon, 24 Nov 2025 03:15:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:15:45 +0000   Mon, 24 Nov 2025 03:15:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-427637
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                839ea9e0-fcf6-4e2e-8442-8290dbe40da1
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-lwlxk                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-embed-certs-427637                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-nbw22                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-427637             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-embed-certs-427637    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-nr7h4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-427637             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  33s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node embed-certs-427637 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node embed-certs-427637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node embed-certs-427637 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node embed-certs-427637 event: Registered Node embed-certs-427637 in Controller
	  Normal  NodeReady                16s   kubelet          Node embed-certs-427637 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [32fa11b4d353ac18238716802bf8849023987e1942cfbc93ea1025ed998f28a1] <==
	{"level":"info","ts":"2025-11-24T03:15:12.374418Z","caller":"traceutil/trace.go:172","msg":"trace[433668974] transaction","detail":"{read_only:false; response_revision:20; number_of_response:1; }","duration":"246.863145ms","start":"2025-11-24T03:15:12.127549Z","end":"2025-11-24T03:15:12.374412Z","steps":["trace[433668974] 'process raft request'  (duration: 246.672771ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.374463Z","caller":"traceutil/trace.go:172","msg":"trace[137880259] linearizableReadLoop","detail":"{readStateIndex:17; appliedIndex:16; }","duration":"128.511549ms","start":"2025-11-24T03:15:12.245713Z","end":"2025-11-24T03:15:12.374225Z","steps":["trace[137880259] 'read index received'  (duration: 126.979615ms)","trace[137880259] 'applied index is now lower than readState.Index'  (duration: 1.528778ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:15:12.374526Z","caller":"traceutil/trace.go:172","msg":"trace[704840880] transaction","detail":"{read_only:false; response_revision:21; number_of_response:1; }","duration":"251.664331ms","start":"2025-11-24T03:15:12.122855Z","end":"2025-11-24T03:15:12.374519Z","steps":["trace[704840880] 'process raft request'  (duration: 251.389434ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:15:12.374526Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.127539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-427637\" limit:1 ","response":"range_response_count:1 size:3566"}
	{"level":"info","ts":"2025-11-24T03:15:12.376023Z","caller":"traceutil/trace.go:172","msg":"trace[201413550] range","detail":"{range_begin:/registry/minions/embed-certs-427637; range_end:; response_count:1; response_revision:23; }","duration":"253.626696ms","start":"2025-11-24T03:15:12.122385Z","end":"2025-11-24T03:15:12.376012Z","steps":["trace[201413550] 'agreement among raft nodes before linearized reading'  (duration: 252.105556ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.374544Z","caller":"traceutil/trace.go:172","msg":"trace[292989784] transaction","detail":"{read_only:false; response_revision:15; number_of_response:1; }","duration":"247.47147ms","start":"2025-11-24T03:15:12.127061Z","end":"2025-11-24T03:15:12.374532Z","steps":["trace[292989784] 'process raft request'  (duration: 247.018046ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.374554Z","caller":"traceutil/trace.go:172","msg":"trace[339810947] transaction","detail":"{read_only:false; response_revision:17; number_of_response:1; }","duration":"247.200915ms","start":"2025-11-24T03:15:12.127347Z","end":"2025-11-24T03:15:12.374548Z","steps":["trace[339810947] 'process raft request'  (duration: 246.803977ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:15:12.374580Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.927868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-11-24T03:15:12.376610Z","caller":"traceutil/trace.go:172","msg":"trace[87780294] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:23; }","duration":"254.95499ms","start":"2025-11-24T03:15:12.121640Z","end":"2025-11-24T03:15:12.376595Z","steps":["trace[87780294] 'agreement among raft nodes before linearized reading'  (duration: 252.854072ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.374606Z","caller":"traceutil/trace.go:172","msg":"trace[1909663951] transaction","detail":"{read_only:false; response_revision:22; number_of_response:1; }","duration":"243.618876ms","start":"2025-11-24T03:15:12.130981Z","end":"2025-11-24T03:15:12.374600Z","steps":["trace[1909663951] 'process raft request'  (duration: 243.306136ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.374609Z","caller":"traceutil/trace.go:172","msg":"trace[1041631161] transaction","detail":"{read_only:false; response_revision:16; number_of_response:1; }","duration":"247.516597ms","start":"2025-11-24T03:15:12.127084Z","end":"2025-11-24T03:15:12.374601Z","steps":["trace[1041631161] 'process raft request'  (duration: 247.035488ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.374632Z","caller":"traceutil/trace.go:172","msg":"trace[290592294] transaction","detail":"{read_only:false; response_revision:13; number_of_response:1; }","duration":"253.699ms","start":"2025-11-24T03:15:12.120926Z","end":"2025-11-24T03:15:12.374625Z","steps":["trace[290592294] 'process raft request'  (duration: 124.76671ms)","trace[290592294] 'compare'  (duration: 127.530838ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:15:12.506735Z","caller":"traceutil/trace.go:172","msg":"trace[1107527160] linearizableReadLoop","detail":"{readStateIndex:27; appliedIndex:27; }","duration":"124.406187ms","start":"2025-11-24T03:15:12.382307Z","end":"2025-11-24T03:15:12.506713Z","steps":["trace[1107527160] 'read index received'  (duration: 124.399059ms)","trace[1107527160] 'applied index is now lower than readState.Index'  (duration: 5.866µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T03:15:12.599172Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"216.83261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T03:15:12.599248Z","caller":"traceutil/trace.go:172","msg":"trace[1136797137] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:0; response_revision:23; }","duration":"216.927158ms","start":"2025-11-24T03:15:12.382303Z","end":"2025-11-24T03:15:12.599230Z","steps":["trace[1136797137] 'agreement among raft nodes before linearized reading'  (duration: 124.493407ms)","trace[1136797137] 'range keys from in-memory index tree'  (duration: 92.294868ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:15:12.599277Z","caller":"traceutil/trace.go:172","msg":"trace[1462202500] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"218.626214ms","start":"2025-11-24T03:15:12.380633Z","end":"2025-11-24T03:15:12.599259Z","steps":["trace[1462202500] 'process raft request'  (duration: 126.098823ms)","trace[1462202500] 'compare'  (duration: 92.396955ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:15:12.600013Z","caller":"traceutil/trace.go:172","msg":"trace[10143233] transaction","detail":"{read_only:false; response_revision:31; number_of_response:1; }","duration":"217.42856ms","start":"2025-11-24T03:15:12.382573Z","end":"2025-11-24T03:15:12.600001Z","steps":["trace[10143233] 'process raft request'  (duration: 217.404556ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.600036Z","caller":"traceutil/trace.go:172","msg":"trace[447826844] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"218.845898ms","start":"2025-11-24T03:15:12.381177Z","end":"2025-11-24T03:15:12.600023Z","steps":["trace[447826844] 'process raft request'  (duration: 218.58446ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.600062Z","caller":"traceutil/trace.go:172","msg":"trace[826399874] transaction","detail":"{read_only:false; response_revision:28; number_of_response:1; }","duration":"217.929279ms","start":"2025-11-24T03:15:12.382124Z","end":"2025-11-24T03:15:12.600053Z","steps":["trace[826399874] 'process raft request'  (duration: 217.783284ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.600124Z","caller":"traceutil/trace.go:172","msg":"trace[1073308548] transaction","detail":"{read_only:false; response_revision:26; number_of_response:1; }","duration":"218.140941ms","start":"2025-11-24T03:15:12.381977Z","end":"2025-11-24T03:15:12.600118Z","steps":["trace[1073308548] 'process raft request'  (duration: 217.866483ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.600157Z","caller":"traceutil/trace.go:172","msg":"trace[401622904] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"217.977395ms","start":"2025-11-24T03:15:12.382170Z","end":"2025-11-24T03:15:12.600147Z","steps":["trace[401622904] 'process raft request'  (duration: 217.781248ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.600244Z","caller":"traceutil/trace.go:172","msg":"trace[2106743223] transaction","detail":"{read_only:false; response_revision:27; number_of_response:1; }","duration":"218.195707ms","start":"2025-11-24T03:15:12.382040Z","end":"2025-11-24T03:15:12.600236Z","steps":["trace[2106743223] 'process raft request'  (duration: 217.83762ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.600269Z","caller":"traceutil/trace.go:172","msg":"trace[1245880939] transaction","detail":"{read_only:false; response_revision:29; number_of_response:1; }","duration":"218.092999ms","start":"2025-11-24T03:15:12.382168Z","end":"2025-11-24T03:15:12.600261Z","steps":["trace[1245880939] 'process raft request'  (duration: 217.761324ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:15:12.836951Z","caller":"traceutil/trace.go:172","msg":"trace[1143838732] transaction","detail":"{read_only:false; response_revision:42; number_of_response:1; }","duration":"169.883367ms","start":"2025-11-24T03:15:12.667035Z","end":"2025-11-24T03:15:12.836918Z","steps":["trace[1143838732] 'process raft request'  (duration: 106.652157ms)","trace[1143838732] 'compare'  (duration: 63.066447ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:15:41.945725Z","caller":"traceutil/trace.go:172","msg":"trace[1590824425] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"150.142414ms","start":"2025-11-24T03:15:41.795561Z","end":"2025-11-24T03:15:41.945703Z","steps":["trace[1590824425] 'process raft request'  (duration: 149.99967ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:15:47 up 58 min,  0 user,  load average: 3.37, 2.91, 2.04
	Linux embed-certs-427637 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0c29b1f094f4a1f822553da904f2d9fd85f07fe1685ade3f85d7a1ad29410529] <==
	I1124 03:15:20.951758       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:15:20.952085       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 03:15:20.952224       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:15:20.952248       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:15:20.952275       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:15:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:15:21.152447       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:15:21.152474       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:15:21.152482       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:15:21.236870       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:15:21.436956       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:15:21.437004       1 metrics.go:72] Registering metrics
	I1124 03:15:21.437080       1 controller.go:711] "Syncing nftables rules"
	I1124 03:15:31.152967       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:15:31.153063       1 main.go:301] handling current node
	I1124 03:15:41.154897       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:15:41.154966       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b86a90195fd1a09eb58b38f26ad5eff53b8fcae105d54dd47c874e892d0342ff] <==
	I1124 03:15:11.961591       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 03:15:11.961620       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1124 03:15:12.118705       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:12.120093       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1124 03:15:12.121874       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1124 03:15:12.122101       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:15:12.377727       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:15:12.378868       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:12.864412       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:15:12.878285       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:15:12.878318       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:15:13.431622       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:15:13.485491       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:15:13.570733       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:15:13.578896       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 03:15:13.580256       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:15:13.585400       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:15:13.890348       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:15:14.515801       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:15:14.526209       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:15:14.532951       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:15:19.698263       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:19.704967       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:19.794612       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:15:19.892726       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4f08f2d505c46cbd0949c947f86ce23acf6de44a1fbea7f5a8f41784e3d9cee7] <==
	I1124 03:15:18.889943       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 03:15:18.889983       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 03:15:18.890021       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:15:18.890178       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:15:18.890220       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 03:15:18.890499       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:15:18.890518       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:15:18.890572       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:15:18.890843       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:15:18.891635       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:15:18.891688       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:15:18.891733       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:15:18.892954       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:15:18.893554       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:15:18.894522       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 03:15:18.894577       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 03:15:18.894608       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 03:15:18.894617       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 03:15:18.894624       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 03:15:18.900984       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:15:18.900964       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:15:18.903516       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-427637" podCIDRs=["10.244.0.0/24"]
	I1124 03:15:18.906841       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:15:18.913201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:15:33.843717       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6ee9232927baded5b8c1850deba884ba097eb1113f0945bbee245ce7682d2b44] <==
	I1124 03:15:20.472987       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:15:20.560649       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:15:20.661246       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:15:20.661291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 03:15:20.661418       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:15:20.684552       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:15:20.684601       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:15:20.690600       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:15:20.691118       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:15:20.691148       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:15:20.693840       1 config.go:200] "Starting service config controller"
	I1124 03:15:20.693869       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:15:20.693882       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:15:20.693896       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:15:20.693903       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:15:20.693910       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:15:20.694027       1 config.go:309] "Starting node config controller"
	I1124 03:15:20.694040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:15:20.794049       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:15:20.794065       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:15:20.794090       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:15:20.794102       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [7456a10c919e6bc8e366bd8d2615b02ba388d90acda2ba06151b651e16735227] <==
	E1124 03:15:11.914822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:15:11.914872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:15:11.914887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:15:11.914880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:15:11.914956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:15:11.914958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:15:11.915012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:15:11.915039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:15:11.915060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:15:11.915078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:15:11.915505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:15:11.915647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:15:11.915660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:15:11.915820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:15:12.718271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:15:12.856969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:15:12.899338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:15:12.919152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:15:13.034944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:15:13.037098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:15:13.139901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:15:13.217993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:15:13.241136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:15:13.286901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 03:15:16.211274       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:15:15 embed-certs-427637 kubelet[1445]: I1124 03:15:15.414987    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-427637" podStartSLOduration=1.414962278 podStartE2EDuration="1.414962278s" podCreationTimestamp="2025-11-24 03:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:15.400119477 +0000 UTC m=+1.131469166" watchObservedRunningTime="2025-11-24 03:15:15.414962278 +0000 UTC m=+1.146311966"
	Nov 24 03:15:15 embed-certs-427637 kubelet[1445]: I1124 03:15:15.426287    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-427637" podStartSLOduration=1.4262538949999999 podStartE2EDuration="1.426253895s" podCreationTimestamp="2025-11-24 03:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:15.414952146 +0000 UTC m=+1.146301834" watchObservedRunningTime="2025-11-24 03:15:15.426253895 +0000 UTC m=+1.157603582"
	Nov 24 03:15:15 embed-certs-427637 kubelet[1445]: I1124 03:15:15.439455    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-427637" podStartSLOduration=1.43943516 podStartE2EDuration="1.43943516s" podCreationTimestamp="2025-11-24 03:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:15.42655058 +0000 UTC m=+1.157900271" watchObservedRunningTime="2025-11-24 03:15:15.43943516 +0000 UTC m=+1.170784852"
	Nov 24 03:15:15 embed-certs-427637 kubelet[1445]: I1124 03:15:15.462563    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-427637" podStartSLOduration=1.462537597 podStartE2EDuration="1.462537597s" podCreationTimestamp="2025-11-24 03:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:15.439616935 +0000 UTC m=+1.170966619" watchObservedRunningTime="2025-11-24 03:15:15.462537597 +0000 UTC m=+1.193887276"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.002595    1445 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.003412    1445 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981240    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnk75\" (UniqueName: \"kubernetes.io/projected/672c7a34-edc7-4839-9de2-2321574fadc7-kube-api-access-mnk75\") pod \"kindnet-nbw22\" (UID: \"672c7a34-edc7-4839-9de2-2321574fadc7\") " pod="kube-system/kindnet-nbw22"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981302    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/672c7a34-edc7-4839-9de2-2321574fadc7-xtables-lock\") pod \"kindnet-nbw22\" (UID: \"672c7a34-edc7-4839-9de2-2321574fadc7\") " pod="kube-system/kindnet-nbw22"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981331    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/672c7a34-edc7-4839-9de2-2321574fadc7-lib-modules\") pod \"kindnet-nbw22\" (UID: \"672c7a34-edc7-4839-9de2-2321574fadc7\") " pod="kube-system/kindnet-nbw22"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981354    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c973bc1-1945-4ef2-af14-44451934d69b-kube-proxy\") pod \"kube-proxy-nr7h4\" (UID: \"3c973bc1-1945-4ef2-af14-44451934d69b\") " pod="kube-system/kube-proxy-nr7h4"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981379    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzlqt\" (UniqueName: \"kubernetes.io/projected/3c973bc1-1945-4ef2-af14-44451934d69b-kube-api-access-fzlqt\") pod \"kube-proxy-nr7h4\" (UID: \"3c973bc1-1945-4ef2-af14-44451934d69b\") " pod="kube-system/kube-proxy-nr7h4"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981421    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/672c7a34-edc7-4839-9de2-2321574fadc7-cni-cfg\") pod \"kindnet-nbw22\" (UID: \"672c7a34-edc7-4839-9de2-2321574fadc7\") " pod="kube-system/kindnet-nbw22"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981443    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c973bc1-1945-4ef2-af14-44451934d69b-xtables-lock\") pod \"kube-proxy-nr7h4\" (UID: \"3c973bc1-1945-4ef2-af14-44451934d69b\") " pod="kube-system/kube-proxy-nr7h4"
	Nov 24 03:15:19 embed-certs-427637 kubelet[1445]: I1124 03:15:19.981463    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c973bc1-1945-4ef2-af14-44451934d69b-lib-modules\") pod \"kube-proxy-nr7h4\" (UID: \"3c973bc1-1945-4ef2-af14-44451934d69b\") " pod="kube-system/kube-proxy-nr7h4"
	Nov 24 03:15:21 embed-certs-427637 kubelet[1445]: I1124 03:15:21.410463    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nbw22" podStartSLOduration=2.410441093 podStartE2EDuration="2.410441093s" podCreationTimestamp="2025-11-24 03:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:21.410347659 +0000 UTC m=+7.141697348" watchObservedRunningTime="2025-11-24 03:15:21.410441093 +0000 UTC m=+7.141790780"
	Nov 24 03:15:21 embed-certs-427637 kubelet[1445]: I1124 03:15:21.410589    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nr7h4" podStartSLOduration=2.410580994 podStartE2EDuration="2.410580994s" podCreationTimestamp="2025-11-24 03:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:21.400928525 +0000 UTC m=+7.132278213" watchObservedRunningTime="2025-11-24 03:15:21.410580994 +0000 UTC m=+7.141930682"
	Nov 24 03:15:31 embed-certs-427637 kubelet[1445]: I1124 03:15:31.181192    1445 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:15:31 embed-certs-427637 kubelet[1445]: I1124 03:15:31.259865    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9vxb\" (UniqueName: \"kubernetes.io/projected/f852078e-d93a-4451-87b5-dc786099fe74-kube-api-access-b9vxb\") pod \"storage-provisioner\" (UID: \"f852078e-d93a-4451-87b5-dc786099fe74\") " pod="kube-system/storage-provisioner"
	Nov 24 03:15:31 embed-certs-427637 kubelet[1445]: I1124 03:15:31.260082    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/089fe6b1-3d54-44a7-bb14-4d23c7b4b612-config-volume\") pod \"coredns-66bc5c9577-lwlxk\" (UID: \"089fe6b1-3d54-44a7-bb14-4d23c7b4b612\") " pod="kube-system/coredns-66bc5c9577-lwlxk"
	Nov 24 03:15:31 embed-certs-427637 kubelet[1445]: I1124 03:15:31.260118    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7p5v\" (UniqueName: \"kubernetes.io/projected/089fe6b1-3d54-44a7-bb14-4d23c7b4b612-kube-api-access-z7p5v\") pod \"coredns-66bc5c9577-lwlxk\" (UID: \"089fe6b1-3d54-44a7-bb14-4d23c7b4b612\") " pod="kube-system/coredns-66bc5c9577-lwlxk"
	Nov 24 03:15:31 embed-certs-427637 kubelet[1445]: I1124 03:15:31.260150    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f852078e-d93a-4451-87b5-dc786099fe74-tmp\") pod \"storage-provisioner\" (UID: \"f852078e-d93a-4451-87b5-dc786099fe74\") " pod="kube-system/storage-provisioner"
	Nov 24 03:15:32 embed-certs-427637 kubelet[1445]: I1124 03:15:32.430345    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lwlxk" podStartSLOduration=12.430322141 podStartE2EDuration="12.430322141s" podCreationTimestamp="2025-11-24 03:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:32.429866328 +0000 UTC m=+18.161216018" watchObservedRunningTime="2025-11-24 03:15:32.430322141 +0000 UTC m=+18.161671833"
	Nov 24 03:15:32 embed-certs-427637 kubelet[1445]: I1124 03:15:32.440346    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.440324751 podStartE2EDuration="12.440324751s" podCreationTimestamp="2025-11-24 03:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:32.439720621 +0000 UTC m=+18.171070309" watchObservedRunningTime="2025-11-24 03:15:32.440324751 +0000 UTC m=+18.171674448"
	Nov 24 03:15:34 embed-certs-427637 kubelet[1445]: I1124 03:15:34.380926    1445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rbhm\" (UniqueName: \"kubernetes.io/projected/218931ee-0865-4000-b423-6af3bc31f260-kube-api-access-9rbhm\") pod \"busybox\" (UID: \"218931ee-0865-4000-b423-6af3bc31f260\") " pod="default/busybox"
	Nov 24 03:15:37 embed-certs-427637 kubelet[1445]: I1124 03:15:37.447377    1445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.380759908 podStartE2EDuration="3.447322965s" podCreationTimestamp="2025-11-24 03:15:34 +0000 UTC" firstStartedPulling="2025-11-24 03:15:34.787334469 +0000 UTC m=+20.518684141" lastFinishedPulling="2025-11-24 03:15:36.853897526 +0000 UTC m=+22.585247198" observedRunningTime="2025-11-24 03:15:37.446723318 +0000 UTC m=+23.178073005" watchObservedRunningTime="2025-11-24 03:15:37.447322965 +0000 UTC m=+23.178672653"
	
	
	==> storage-provisioner [e56e76bbfa118cc06d71064f22f4c4505d29a579e5d600dc5beac2698beb8dd5] <==
	I1124 03:15:31.734408       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:15:31.737068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:31.742578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:15:31.742756       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:15:31.742939       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9b2286f0-97af-4b9f-b226-4a9cb4f54a69", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-427637_e8d0e401-4ed3-4e75-81f1-7bee87c7c4b9 became leader
	I1124 03:15:31.743355       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-427637_e8d0e401-4ed3-4e75-81f1-7bee87c7c4b9!
	W1124 03:15:31.746552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:31.755213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:15:31.843557       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-427637_e8d0e401-4ed3-4e75-81f1-7bee87c7c4b9!
	W1124 03:15:33.758868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:33.763150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:35.766984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:35.771300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:37.776243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:37.780569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:39.784574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:39.788988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:41.792878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:41.946851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:43.950864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:43.955355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:45.959380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:45.964477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:47.968825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:47.974129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
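The burst of "Failed to watch ... is forbidden" errors in the kube-scheduler log above is emitted while the scheduler's RBAC bindings are still being bootstrapped, and it appears to stop once "Caches are synced" is logged. A minimal sketch for confirming the scheduler's list permissions after startup, assuming the same kubectl context (system:kube-scheduler is the standard scheduler identity, not something taken from this run):

	kubectl --context embed-certs-427637 auth can-i list statefulsets.apps --as=system:kube-scheduler
	kubectl --context embed-certs-427637 auth can-i list pods --as=system:kube-scheduler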
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-427637 -n embed-certs-427637
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-427637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (14.49s)
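The storage-provisioner warnings above ("v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice") appear to come from its Endpoints-based leader election on kube-system/k8s.io-minikube-hostpath and are noise rather than a cause of this failure. A hedged sketch for inspecting the lock object and its EndpointSlice counterpart, assuming the same context:

	kubectl --context embed-certs-427637 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context embed-certs-427637 -n kube-system get endpointslices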

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-983163 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c58a2189-5a2a-43df-9dab-025a0f79f2aa] Pending
helpers_test.go:352: "busybox" [c58a2189-5a2a-43df-9dab-025a0f79f2aa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c58a2189-5a2a-43df-9dab-025a0f79f2aa] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005004435s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-983163 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
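The failing assertion above compares the open-file soft limit seen inside the busybox pod (1024) with the 1048576 the test expects. The check can be reproduced by hand with the same exec the test runs; comparing it against the limit inside the minikube node is sketched below as an assumption (the ssh invocation is illustrative, not taken from this log):

	# Same check the test performs inside the pod
	kubectl --context default-k8s-diff-port-983163 exec busybox -- /bin/sh -c "ulimit -n"
	# Hypothetical comparison from inside the node container itself
	out/minikube-linux-amd64 -p default-k8s-diff-port-983163 ssh "ulimit -n"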
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-983163
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-983163:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76",
	        "Created": "2025-11-24T03:15:12.954902195Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282402,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:15:12.991354931Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76/hostname",
	        "HostsPath": "/var/lib/docker/containers/cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76/hosts",
	        "LogPath": "/var/lib/docker/containers/cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76/cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76-json.log",
	        "Name": "/default-k8s-diff-port-983163",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-983163:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-983163",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76",
	                "LowerDir": "/var/lib/docker/overlay2/b0f3893bdb488d7f02ccca9073ec640a3fe251b57c95ab76e2bb0f11b8bccc3b-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0f3893bdb488d7f02ccca9073ec640a3fe251b57c95ab76e2bb0f11b8bccc3b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0f3893bdb488d7f02ccca9073ec640a3fe251b57c95ab76e2bb0f11b8bccc3b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0f3893bdb488d7f02ccca9073ec640a3fe251b57c95ab76e2bb0f11b8bccc3b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-983163",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-983163/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-983163",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-983163",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-983163",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3d8018c37e29dca0c575cf870178777a9df4e18df413adba098be056811e58d4",
	            "SandboxKey": "/var/run/docker/netns/3d8018c37e29",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-983163": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "024d693698fdde35a22906342d133189292be80e6dce59d8d98f74f5f877be6c",
	                    "EndpointID": "47c0fb15b70a9b7027b544372edbbc54c2b1194b4d68496e7a7b59be8951e9c8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "4a:e4:da:ac:70:3a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-983163",
	                        "cb4836c56784"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
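In the inspect output above, "HostConfig.Ulimits" is an empty list, so the node container inherits the Docker daemon's default limits rather than an explicit nofile setting, which may explain the 1024 reported by the test. A hedged sketch for pulling just those fields back out of the same inspect data:

	docker inspect --format '{{json .HostConfig.Ulimits}}' default-k8s-diff-port-983163
	docker inspect --format '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-983163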
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-983163 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-983163 logs -n 25: (1.3501352s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ pause   │ -p old-k8s-version-838815 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ unpause │ -p old-k8s-version-838815 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ delete  │ -p old-k8s-version-838815                                                                                                                                                                                                                           │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ addons  │ enable dashboard -p no-preload-182765 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ start   │ -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p old-k8s-version-838815                                                                                                                                                                                                                           │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ start   │ -p embed-certs-427637 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-427637           │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:15 UTC │
	│ start   │ -p cert-expiration-004045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-004045       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p cert-expiration-004045                                                                                                                                                                                                                           │ cert-expiration-004045       │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p disable-driver-mounts-602172                                                                                                                                                                                                                     │ disable-driver-mounts-602172 │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ start   │ -p default-k8s-diff-port-983163 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-983163 │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:16 UTC │
	│ start   │ -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-093930    │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-093930    │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ image   │ no-preload-182765 image list --format=json                                                                                                                                                                                                          │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ pause   │ -p no-preload-182765 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ unpause │ -p no-preload-182765 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ addons  │ enable metrics-server -p embed-certs-427637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-427637           │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ stop    │ -p embed-certs-427637 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-427637           │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:16 UTC │
	│ delete  │ -p no-preload-182765                                                                                                                                                                                                                                │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p no-preload-182765                                                                                                                                                                                                                                │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ start   │ -p newest-cni-531301 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-531301            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-093930                                                                                                                                                                                                                        │ kubernetes-upgrade-093930    │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ start   │ -p auto-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-682898                  │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-427637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-427637           │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │ 24 Nov 25 03:16 UTC │
	│ start   │ -p embed-certs-427637 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-427637           │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:16:04
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:16:04.564189  296456 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:16:04.564469  296456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:04.564475  296456 out.go:374] Setting ErrFile to fd 2...
	I1124 03:16:04.564482  296456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:04.564809  296456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:16:04.565636  296456 out.go:368] Setting JSON to false
	I1124 03:16:04.566947  296456 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3508,"bootTime":1763950657,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:16:04.567021  296456 start.go:143] virtualization: kvm guest
	I1124 03:16:04.571261  296456 out.go:179] * [embed-certs-427637] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:16:04.572622  296456 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:16:04.574052  296456 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:16:04.572639  296456 notify.go:221] Checking for updates...
	I1124 03:16:04.576449  296456 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:16:04.577649  296456 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 03:16:04.578886  296456 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:16:04.580106  296456 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:16:04.581982  296456 config.go:182] Loaded profile config "embed-certs-427637": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:16:04.582802  296456 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:16:04.619187  296456 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:16:04.619283  296456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:16:04.703038  296456 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:16:04.688574209 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:16:04.703162  296456 docker.go:319] overlay module found
	I1124 03:16:04.704722  296456 out.go:179] * Using the docker driver based on existing profile
	I1124 03:16:04.705738  296456 start.go:309] selected driver: docker
	I1124 03:16:04.705754  296456 start.go:927] validating driver "docker" against &{Name:embed-certs-427637 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-427637 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:16:04.705864  296456 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:16:04.706408  296456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:16:04.780808  296456 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:16:04.770554948 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:16:04.781208  296456 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:16:04.781246  296456 cni.go:84] Creating CNI manager for ""
	I1124 03:16:04.781316  296456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:16:04.781374  296456 start.go:353] cluster config:
	{Name:embed-certs-427637 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-427637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:16:04.783102  296456 out.go:179] * Starting "embed-certs-427637" primary control-plane node in "embed-certs-427637" cluster
	I1124 03:16:04.783845  296456 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:16:04.785049  296456 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:16:04.786313  296456 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:16:04.786349  296456 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1124 03:16:04.786361  296456 cache.go:65] Caching tarball of preloaded images
	I1124 03:16:04.786419  296456 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:16:04.786466  296456 preload.go:238] Found /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 03:16:04.786482  296456 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 03:16:04.786620  296456 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/config.json ...
	I1124 03:16:04.808410  296456 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:16:04.808431  296456 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:16:04.808451  296456 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:16:04.808483  296456 start.go:360] acquireMachinesLock for embed-certs-427637: {Name:mkf67edec8afad055eff25b5939c61a6a43d59be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:16:04.808544  296456 start.go:364] duration metric: took 41.182µs to acquireMachinesLock for "embed-certs-427637"
	I1124 03:16:04.808565  296456 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:16:04.808575  296456 fix.go:54] fixHost starting: 
	I1124 03:16:04.808864  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:04.827837  296456 fix.go:112] recreateIfNeeded on embed-certs-427637: state=Stopped err=<nil>
	W1124 03:16:04.827877  296456 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:16:03.943368  292708 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:16:04.054435  292708 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:16:04.199408  292708 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:16:04.199562  292708 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:16:04.734518  292708 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:16:04.968714  292708 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:16:05.201121  292708 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:16:05.483354  292708 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:16:06.119871  292708 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:16:06.120383  292708 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:16:06.124057  292708 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:16:06.125414  292708 out.go:252]   - Booting up control plane ...
	I1124 03:16:06.125494  292708 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:16:06.125576  292708 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:16:06.126340  292708 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:16:06.140929  292708 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:16:06.141056  292708 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:16:06.147647  292708 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:16:06.147891  292708 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:16:06.147939  292708 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:16:06.251488  292708 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:16:06.251649  292708 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:16:07.253099  292708 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001721573s
	I1124 03:16:07.256282  292708 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:16:07.256401  292708 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 03:16:07.256562  292708 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:16:07.256679  292708 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
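The three [control-plane-check] probes above poll health endpoints that can also be queried by hand from inside the node while the control plane boots; a minimal sketch (addresses and ports are taken from the log, the curl flags are illustrative):
	# probe the endpoints kubeadm is waiting on (run inside the minikube node; -k skips cert verification)
	curl -sk https://192.168.85.2:8443/livez      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez        # kube-scheduler
	curl -s  http://127.0.0.1:10248/healthz       # kubelet (the earlier [kubelet-check])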
	I1124 03:16:03.852308  294601 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-682898:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.313343779s)
	I1124 03:16:03.852358  294601 kic.go:203] duration metric: took 4.313519476s to extract preloaded images to volume ...
	W1124 03:16:03.852456  294601 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:16:03.852504  294601 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:16:03.852561  294601 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:16:03.925549  294601 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-682898 --name auto-682898 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-682898 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-682898 --network auto-682898 --ip 192.168.76.2 --volume auto-682898:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:16:04.271988  294601 cli_runner.go:164] Run: docker container inspect auto-682898 --format={{.State.Running}}
	I1124 03:16:04.295176  294601 cli_runner.go:164] Run: docker container inspect auto-682898 --format={{.State.Status}}
	I1124 03:16:04.320147  294601 cli_runner.go:164] Run: docker exec auto-682898 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:16:04.378690  294601 oci.go:144] the created container "auto-682898" has a running status.
	I1124 03:16:04.378721  294601 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa...
	I1124 03:16:04.462039  294601 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:16:04.489664  294601 cli_runner.go:164] Run: docker container inspect auto-682898 --format={{.State.Status}}
	I1124 03:16:04.511668  294601 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:16:04.511690  294601 kic_runner.go:114] Args: [docker exec --privileged auto-682898 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:16:04.565483  294601 cli_runner.go:164] Run: docker container inspect auto-682898 --format={{.State.Status}}
	I1124 03:16:04.587711  294601 machine.go:94] provisionDockerMachine start ...
	I1124 03:16:04.587812  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:04.616207  294601 main.go:143] libmachine: Using SSH client type: native
	I1124 03:16:04.616749  294601 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1124 03:16:04.616767  294601 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:16:04.618577  294601 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49828->127.0.0.1:33097: read: connection reset by peer
	I1124 03:16:07.764707  294601 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-682898
	
	I1124 03:16:07.764733  294601 ubuntu.go:182] provisioning hostname "auto-682898"
	I1124 03:16:07.764827  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:07.791927  294601 main.go:143] libmachine: Using SSH client type: native
	I1124 03:16:07.792211  294601 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1124 03:16:07.792237  294601 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-682898 && echo "auto-682898" | sudo tee /etc/hostname
	I1124 03:16:07.959290  294601 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-682898
	
	I1124 03:16:07.959388  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:07.980980  294601 main.go:143] libmachine: Using SSH client type: native
	I1124 03:16:07.981249  294601 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1124 03:16:07.981277  294601 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-682898' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-682898/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-682898' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:16:08.121337  294601 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:16:08.121378  294601 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-4883/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-4883/.minikube}
	I1124 03:16:08.121400  294601 ubuntu.go:190] setting up certificates
	I1124 03:16:08.121410  294601 provision.go:84] configureAuth start
	I1124 03:16:08.121468  294601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-682898
	I1124 03:16:08.144747  294601 provision.go:143] copyHostCerts
	I1124 03:16:08.144862  294601 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem, removing ...
	I1124 03:16:08.144878  294601 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem
	I1124 03:16:08.144958  294601 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem (1078 bytes)
	I1124 03:16:08.145078  294601 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem, removing ...
	I1124 03:16:08.145090  294601 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem
	I1124 03:16:08.145127  294601 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem (1123 bytes)
	I1124 03:16:08.145204  294601 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem, removing ...
	I1124 03:16:08.145217  294601 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem
	I1124 03:16:08.145249  294601 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem (1679 bytes)
	I1124 03:16:08.145322  294601 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem org=jenkins.auto-682898 san=[127.0.0.1 192.168.76.2 auto-682898 localhost minikube]
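The server certificate generated above embeds the SANs listed in the log (127.0.0.1, 192.168.76.2, auto-682898, localhost, minikube); they can be read back with openssl if needed (path taken from the log, the grep is illustrative):
	# list the SANs baked into the freshly generated machine server certificate
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'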
	I1124 03:16:08.231294  294601 provision.go:177] copyRemoteCerts
	I1124 03:16:08.231352  294601 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:16:08.231390  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:08.257103  294601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa Username:docker}
	I1124 03:16:08.364421  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:16:08.390344  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:16:08.409002  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 03:16:08.427828  294601 provision.go:87] duration metric: took 306.404518ms to configureAuth
	I1124 03:16:08.427861  294601 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:16:08.428087  294601 config.go:182] Loaded profile config "auto-682898": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:16:08.428100  294601 machine.go:97] duration metric: took 3.840367759s to provisionDockerMachine
	I1124 03:16:08.428108  294601 client.go:176] duration metric: took 9.446646086s to LocalClient.Create
	I1124 03:16:08.428133  294601 start.go:167] duration metric: took 9.446717151s to libmachine.API.Create "auto-682898"
	I1124 03:16:08.428150  294601 start.go:293] postStartSetup for "auto-682898" (driver="docker")
	I1124 03:16:08.428161  294601 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:16:08.428211  294601 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:16:08.428266  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:08.448894  294601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa Username:docker}
	I1124 03:16:08.552691  294601 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:16:08.556915  294601 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:16:08.556942  294601 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:16:08.556955  294601 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/addons for local assets ...
	I1124 03:16:08.557005  294601 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/files for local assets ...
	I1124 03:16:08.557103  294601 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem -> 84292.pem in /etc/ssl/certs
	I1124 03:16:08.557238  294601 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:16:08.565646  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:16:08.587446  294601 start.go:296] duration metric: took 159.280404ms for postStartSetup
	I1124 03:16:08.587845  294601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-682898
	I1124 03:16:08.609093  294601 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/config.json ...
	I1124 03:16:08.609367  294601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:16:08.609434  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:08.630772  294601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa Username:docker}
	I1124 03:16:08.728242  294601 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:16:08.733134  294601 start.go:128] duration metric: took 9.753927474s to createHost
	I1124 03:16:08.733158  294601 start.go:83] releasing machines lock for "auto-682898", held for 9.754062175s
	I1124 03:16:08.733228  294601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-682898
	I1124 03:16:08.752641  294601 ssh_runner.go:195] Run: cat /version.json
	I1124 03:16:08.752683  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:08.752725  294601 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:16:08.752816  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:08.777315  294601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa Username:docker}
	I1124 03:16:08.777574  294601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa Username:docker}
	I1124 03:16:04.829552  296456 out.go:252] * Restarting existing docker container for "embed-certs-427637" ...
	I1124 03:16:04.829626  296456 cli_runner.go:164] Run: docker start embed-certs-427637
	I1124 03:16:05.131144  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:05.151228  296456 kic.go:430] container "embed-certs-427637" state is running.
	I1124 03:16:05.151617  296456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-427637
	I1124 03:16:05.171601  296456 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/config.json ...
	I1124 03:16:05.171888  296456 machine.go:94] provisionDockerMachine start ...
	I1124 03:16:05.171960  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:05.191587  296456 main.go:143] libmachine: Using SSH client type: native
	I1124 03:16:05.191890  296456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I1124 03:16:05.191903  296456 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:16:05.192464  296456 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58260->127.0.0.1:33102: read: connection reset by peer
	I1124 03:16:08.350224  296456 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-427637
	
	I1124 03:16:08.350258  296456 ubuntu.go:182] provisioning hostname "embed-certs-427637"
	I1124 03:16:08.350320  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:08.373647  296456 main.go:143] libmachine: Using SSH client type: native
	I1124 03:16:08.373993  296456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I1124 03:16:08.374013  296456 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-427637 && echo "embed-certs-427637" | sudo tee /etc/hostname
	I1124 03:16:08.534135  296456 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-427637
	
	I1124 03:16:08.534210  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:08.555057  296456 main.go:143] libmachine: Using SSH client type: native
	I1124 03:16:08.555342  296456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I1124 03:16:08.555369  296456 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-427637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-427637/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-427637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:16:08.700432  296456 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:16:08.700469  296456 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-4883/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-4883/.minikube}
	I1124 03:16:08.700519  296456 ubuntu.go:190] setting up certificates
	I1124 03:16:08.700548  296456 provision.go:84] configureAuth start
	I1124 03:16:08.700619  296456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-427637
	I1124 03:16:08.722330  296456 provision.go:143] copyHostCerts
	I1124 03:16:08.722393  296456 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem, removing ...
	I1124 03:16:08.722412  296456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem
	I1124 03:16:08.722477  296456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem (1078 bytes)
	I1124 03:16:08.722600  296456 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem, removing ...
	I1124 03:16:08.722611  296456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem
	I1124 03:16:08.722648  296456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem (1123 bytes)
	I1124 03:16:08.722736  296456 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem, removing ...
	I1124 03:16:08.722746  296456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem
	I1124 03:16:08.722884  296456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem (1679 bytes)
	I1124 03:16:08.722989  296456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem org=jenkins.embed-certs-427637 san=[127.0.0.1 192.168.94.2 embed-certs-427637 localhost minikube]
	I1124 03:16:08.780496  296456 provision.go:177] copyRemoteCerts
	I1124 03:16:08.780572  296456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:16:08.780630  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:08.805374  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:08.905846  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:16:08.923283  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:16:08.940540  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:16:08.957622  296456 provision.go:87] duration metric: took 257.055621ms to configureAuth
	I1124 03:16:08.957653  296456 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:16:08.957856  296456 config.go:182] Loaded profile config "embed-certs-427637": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:16:08.957870  296456 machine.go:97] duration metric: took 3.785966028s to provisionDockerMachine
	I1124 03:16:08.957878  296456 start.go:293] postStartSetup for "embed-certs-427637" (driver="docker")
	I1124 03:16:08.957887  296456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:16:08.957933  296456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:16:08.957986  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:08.978456  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:09.080008  296456 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:16:09.083667  296456 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:16:09.083689  296456 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:16:09.083701  296456 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/addons for local assets ...
	I1124 03:16:09.083758  296456 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/files for local assets ...
	I1124 03:16:09.083870  296456 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem -> 84292.pem in /etc/ssl/certs
	I1124 03:16:09.083957  296456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:16:09.093710  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:16:09.114850  296456 start.go:296] duration metric: took 156.957882ms for postStartSetup
	I1124 03:16:09.114933  296456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:16:09.114980  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:09.134385  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:09.235988  296456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:16:09.241110  296456 fix.go:56] duration metric: took 4.432530609s for fixHost
	I1124 03:16:09.241140  296456 start.go:83] releasing machines lock for "embed-certs-427637", held for 4.43258294s
	I1124 03:16:09.241210  296456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-427637
	I1124 03:16:09.261691  296456 ssh_runner.go:195] Run: cat /version.json
	I1124 03:16:09.261795  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:09.261851  296456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:16:09.261946  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:09.282894  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:09.285381  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:09.457347  296456 ssh_runner.go:195] Run: systemctl --version
	I1124 03:16:09.465035  296456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:16:09.470311  296456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:16:09.470394  296456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:16:09.480591  296456 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:16:09.480615  296456 start.go:496] detecting cgroup driver to use...
	I1124 03:16:09.480645  296456 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:16:09.480688  296456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:16:09.501117  296456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:16:09.518130  296456 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:16:09.518208  296456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:16:09.536768  296456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:16:09.551882  296456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:16:08.961492  294601 ssh_runner.go:195] Run: systemctl --version
	I1124 03:16:08.968965  294601 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:16:08.974228  294601 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:16:08.974309  294601 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:16:09.000977  294601 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:16:09.001015  294601 start.go:496] detecting cgroup driver to use...
	I1124 03:16:09.001048  294601 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:16:09.001097  294601 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:16:09.016536  294601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:16:09.029135  294601 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:16:09.029184  294601 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:16:09.046468  294601 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:16:09.065271  294601 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:16:09.155398  294601 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:16:09.246680  294601 docker.go:234] disabling docker service ...
	I1124 03:16:09.246823  294601 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:16:09.270369  294601 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:16:09.287049  294601 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:16:09.390734  294601 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:16:09.499094  294601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:16:09.515139  294601 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:16:09.533307  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 03:16:09.545750  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:16:09.557193  294601 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 03:16:09.557258  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 03:16:09.568899  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:16:09.580236  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:16:09.594178  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:16:09.606877  294601 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:16:09.616975  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:16:09.628448  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:16:09.639541  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:16:09.649768  294601 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:16:09.658215  294601 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:16:09.668342  294601 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:16:09.786044  294601 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 03:16:09.904966  294601 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:16:09.905038  294601 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:16:09.909557  294601 start.go:564] Will wait 60s for crictl version
	I1124 03:16:09.909628  294601 ssh_runner.go:195] Run: which crictl
	I1124 03:16:09.913357  294601 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:16:09.942245  294601 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:16:09.942308  294601 ssh_runner.go:195] Run: containerd --version
	I1124 03:16:09.973116  294601 ssh_runner.go:195] Run: containerd --version
	I1124 03:16:10.004286  294601 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
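The sed edits a few lines above flip the pause image, the CNI conf_dir and SystemdCgroup settings in /etc/containerd/config.toml before containerd is restarted; a quick way to spot-check the result (a sketch, the key names follow the sed patterns in the log):
	# verify the containerd settings minikube just rewrote, and that the daemon came back
	grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	sudo systemctl is-active containerd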
	I1124 03:16:09.657244  296456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:16:09.783321  296456 docker.go:234] disabling docker service ...
	I1124 03:16:09.783389  296456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:16:09.803157  296456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:16:09.818346  296456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:16:09.927637  296456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:16:10.022909  296456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:16:10.036262  296456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:16:10.052285  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 03:16:10.063192  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:16:10.073940  296456 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 03:16:10.074015  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 03:16:10.083489  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:16:10.093648  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:16:10.107084  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:16:10.118296  296456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:16:10.127761  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:16:10.137079  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:16:10.146911  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:16:10.157672  296456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:16:10.165551  296456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:16:10.173302  296456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:16:10.281210  296456 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 03:16:10.453432  296456 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:16:10.453493  296456 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:16:10.459809  296456 start.go:564] Will wait 60s for crictl version
	I1124 03:16:10.459941  296456 ssh_runner.go:195] Run: which crictl
	I1124 03:16:10.463825  296456 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:16:10.500062  296456 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:16:10.500149  296456 ssh_runner.go:195] Run: containerd --version
	I1124 03:16:10.531389  296456 ssh_runner.go:195] Run: containerd --version
	I1124 03:16:10.560393  296456 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 03:16:10.562074  296456 cli_runner.go:164] Run: docker network inspect embed-certs-427637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:16:10.584359  296456 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:16:10.589123  296456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:16:10.599861  296456 kubeadm.go:884] updating cluster {Name:embed-certs-427637 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-427637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:16:10.600002  296456 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:16:10.600065  296456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:16:10.629745  296456 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:16:10.629770  296456 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:16:10.629847  296456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:16:10.657491  296456 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:16:10.657514  296456 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:16:10.657523  296456 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1124 03:16:10.657658  296456 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-427637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-427637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
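The empty ExecStart= line in the unit above is the usual systemd drop-in idiom: it clears the packaged command before substituting minikube's own kubelet invocation (the drop-in is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down). The effective command can be confirmed after the daemon-reload (illustrative):
	# show the merged kubelet unit and the ExecStart that will actually run
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart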
	I1124 03:16:10.657724  296456 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:16:10.685304  296456 cni.go:84] Creating CNI manager for ""
	I1124 03:16:10.685325  296456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:16:10.685337  296456 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:16:10.685354  296456 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-427637 NodeName:embed-certs-427637 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:16:10.685465  296456 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-427637"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:16:10.685531  296456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:16:10.693971  296456 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:16:10.694052  296456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:16:10.701850  296456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 03:16:10.714454  296456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:16:10.726969  296456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
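The kubeadm config rendered above is the payload of the kubeadm.yaml.new scp on the previous line, and it is the kind of file kubeadm consumes via its --config flag; it can be sanity-checked against the installed binaries along these lines (illustrative, not how minikube itself invokes kubeadm here):
	# validate the generated kubeadm config without touching the cluster
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new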
	I1124 03:16:10.739568  296456 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:16:10.743313  296456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:16:10.754348  296456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:16:10.835612  296456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:16:10.857249  296456 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637 for IP: 192.168.94.2
	I1124 03:16:10.857269  296456 certs.go:195] generating shared ca certs ...
	I1124 03:16:10.857286  296456 certs.go:227] acquiring lock for ca certs: {Name:mkd28e9f2e8e31fe23d0ba27851eb0df56d94420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:10.857452  296456 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key
	I1124 03:16:10.857512  296456 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key
	I1124 03:16:10.857526  296456 certs.go:257] generating profile certs ...
	I1124 03:16:10.857627  296456 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/client.key
	I1124 03:16:10.857726  296456 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/apiserver.key.de418b6c
	I1124 03:16:10.857804  296456 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/proxy-client.key
	I1124 03:16:10.857987  296456 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem (1338 bytes)
	W1124 03:16:10.858032  296456 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429_empty.pem, impossibly tiny 0 bytes
	I1124 03:16:10.858043  296456 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:16:10.858079  296456 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:16:10.858158  296456 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:16:10.858208  296456 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem (1679 bytes)
	I1124 03:16:10.858271  296456 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:16:10.859072  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:16:10.878851  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:16:10.898950  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:16:10.918497  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:16:10.944447  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:16:10.966836  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:16:10.987373  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:16:11.004373  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:16:11.021494  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:16:11.039836  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem --> /usr/share/ca-certificates/8429.pem (1338 bytes)
	I1124 03:16:11.059480  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /usr/share/ca-certificates/84292.pem (1708 bytes)
	I1124 03:16:11.080432  296456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:16:11.095555  296456 ssh_runner.go:195] Run: openssl version
	I1124 03:16:11.102316  296456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:16:11.111177  296456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:16:11.114808  296456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:16:11.114862  296456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:16:11.151286  296456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:16:11.160550  296456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8429.pem && ln -fs /usr/share/ca-certificates/8429.pem /etc/ssl/certs/8429.pem"
	I1124 03:16:11.170833  296456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8429.pem
	I1124 03:16:11.174670  296456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/8429.pem
	I1124 03:16:11.174723  296456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8429.pem
	I1124 03:16:11.210853  296456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8429.pem /etc/ssl/certs/51391683.0"
	I1124 03:16:11.219654  296456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84292.pem && ln -fs /usr/share/ca-certificates/84292.pem /etc/ssl/certs/84292.pem"
	I1124 03:16:11.228977  296456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84292.pem
	I1124 03:16:11.232728  296456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/84292.pem
	I1124 03:16:11.232792  296456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84292.pem
	I1124 03:16:11.275886  296456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84292.pem /etc/ssl/certs/3ec20f2e.0"
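The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes printed by the preceding openssl x509 -hash calls, which is how the system trust store looks certificates up; for example (paths and hash taken from the log):
	# the hash printed here is what names the /etc/ssl/certs symlink for minikubeCA.pem
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0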
	I1124 03:16:11.284387  296456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:16:11.288998  296456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:16:11.335276  296456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:16:11.384646  296456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:16:11.445235  296456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:16:11.512979  296456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:16:11.575079  296456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
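Each -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what prompts regeneration. A stand-alone form of the same check (illustrative):
	# exits 0 if the cert is still valid 24h from now, 1 otherwise
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"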
	I1124 03:16:11.639199  296456 kubeadm.go:401] StartCluster: {Name:embed-certs-427637 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-427637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:16:11.639350  296456 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:16:11.639527  296456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:16:11.720508  296456 cri.go:89] found id: "ad3e39fa3b1eb2303ca9a61e021329077bb6a42757d9867ea44286c63a41b396"
	I1124 03:16:11.720539  296456 cri.go:89] found id: "1905ba415bc32ade6726b9e73ded61d94eea5952320b4ec7490cccea3bdd8e5c"
	I1124 03:16:11.720545  296456 cri.go:89] found id: "fe38457a72ea7fc882c58b85369848bf57d18aefe81105f137666578a02d6e0b"
	I1124 03:16:11.720550  296456 cri.go:89] found id: "34db4524c1971bf9cb9799bfca61fa40491c77969833d8604733a99d27d41043"
	I1124 03:16:11.720554  296456 cri.go:89] found id: "1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465"
	I1124 03:16:11.720559  296456 cri.go:89] found id: "e56e76bbfa118cc06d71064f22f4c4505d29a579e5d600dc5beac2698beb8dd5"
	I1124 03:16:11.720563  296456 cri.go:89] found id: "0c29b1f094f4a1f822553da904f2d9fd85f07fe1685ade3f85d7a1ad29410529"
	I1124 03:16:11.720566  296456 cri.go:89] found id: "6ee9232927baded5b8c1850deba884ba097eb1113f0945bbee245ce7682d2b44"
	I1124 03:16:11.720570  296456 cri.go:89] found id: "7456a10c919e6bc8e366bd8d2615b02ba388d90acda2ba06151b651e16735227"
	I1124 03:16:11.720580  296456 cri.go:89] found id: "4f08f2d505c46cbd0949c947f86ce23acf6de44a1fbea7f5a8f41784e3d9cee7"
	I1124 03:16:11.720584  296456 cri.go:89] found id: "b86a90195fd1a09eb58b38f26ad5eff53b8fcae105d54dd47c874e892d0342ff"
	I1124 03:16:11.720587  296456 cri.go:89] found id: "32fa11b4d353ac18238716802bf8849023987e1942cfbc93ea1025ed998f28a1"
	I1124 03:16:11.720591  296456 cri.go:89] found id: ""
	I1124 03:16:11.720642  296456 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1124 03:16:11.781972  296456 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d","pid":788,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d/rootfs","created":"2025-11-24T03:16:11.468011244Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-427637_a4ab8ffb6c99a75236ea037883afe25d","io.kubernetes.cri.sandbox-memo
ry":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a4ab8ffb6c99a75236ea037883afe25d"},"owner":"root"},{"ociVersion":"1.2.1","id":"0830e39eeeafab195bf6cfbdde0c962d7d4b6ecb4414b7844f1e8b2f6e008805","pid":834,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0830e39eeeafab195bf6cfbdde0c962d7d4b6ecb4414b7844f1e8b2f6e008805","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0830e39eeeafab195bf6cfbdde0c962d7d4b6ecb4414b7844f1e8b2f6e008805/rootfs","created":"2025-11-24T03:16:11.479837341Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"0830e39eeeafab195bf6cfbdde0c962d7d4b6ecb4414b7844f1e8b2f6e008805","io.kubernetes
.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-427637_ed6d76621c0cd78dcd5e22dd56ee6e9f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ed6d76621c0cd78dcd5e22dd56ee6e9f"},"owner":"root"},{"ociVersion":"1.2.1","id":"1905ba415bc32ade6726b9e73ded61d94eea5952320b4ec7490cccea3bdd8e5c","pid":973,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1905ba415bc32ade6726b9e73ded61d94eea5952320b4ec7490cccea3bdd8e5c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1905ba415bc32ade6726b9e73ded61d94eea5952320b4ec7490cccea3bdd8e5c/rootfs","created":"2025-11-24T03:16:11.690184452Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"ed85b8aa40
91613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100","io.kubernetes.cri.sandbox-name":"etcd-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b61c3945d487ad115d9c49c84cf7d890"},"owner":"root"},{"ociVersion":"1.2.1","id":"34db4524c1971bf9cb9799bfca61fa40491c77969833d8604733a99d27d41043","pid":932,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/34db4524c1971bf9cb9799bfca61fa40491c77969833d8604733a99d27d41043","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/34db4524c1971bf9cb9799bfca61fa40491c77969833d8604733a99d27d41043/rootfs","created":"2025-11-24T03:16:11.636873636Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d","io.kubernetes.cri.sandbox-name":"kube-apiserver-embed-cer
ts-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a4ab8ffb6c99a75236ea037883afe25d"},"owner":"root"},{"ociVersion":"1.2.1","id":"a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e","pid":862,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e/rootfs","created":"2025-11-24T03:16:11.511295854Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-schedu
ler-embed-certs-427637_f766c52874e398cbe2a2e1ace888f34d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f766c52874e398cbe2a2e1ace888f34d"},"owner":"root"},{"ociVersion":"1.2.1","id":"ad3e39fa3b1eb2303ca9a61e021329077bb6a42757d9867ea44286c63a41b396","pid":982,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad3e39fa3b1eb2303ca9a61e021329077bb6a42757d9867ea44286c63a41b396","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad3e39fa3b1eb2303ca9a61e021329077bb6a42757d9867ea44286c63a41b396/rootfs","created":"2025-11-24T03:16:11.679117423Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e","io.kube
rnetes.cri.sandbox-name":"kube-scheduler-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f766c52874e398cbe2a2e1ace888f34d"},"owner":"root"},{"ociVersion":"1.2.1","id":"ed85b8aa4091613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100","pid":869,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed85b8aa4091613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed85b8aa4091613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100/rootfs","created":"2025-11-24T03:16:11.515909436Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ed85b8aa4091613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100","io.kubernetes.cri.sandbox-log
-directory":"/var/log/pods/kube-system_etcd-embed-certs-427637_b61c3945d487ad115d9c49c84cf7d890","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b61c3945d487ad115d9c49c84cf7d890"},"owner":"root"},{"ociVersion":"1.2.1","id":"fe38457a72ea7fc882c58b85369848bf57d18aefe81105f137666578a02d6e0b","pid":934,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe38457a72ea7fc882c58b85369848bf57d18aefe81105f137666578a02d6e0b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe38457a72ea7fc882c58b85369848bf57d18aefe81105f137666578a02d6e0b/rootfs","created":"2025-11-24T03:16:11.634530826Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"0830e39eeeafab195bf6cfbdde
0c962d7d4b6ecb4414b7844f1e8b2f6e008805","io.kubernetes.cri.sandbox-name":"kube-controller-manager-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ed6d76621c0cd78dcd5e22dd56ee6e9f"},"owner":"root"}]
	I1124 03:16:11.782206  296456 cri.go:126] list returned 8 containers
	I1124 03:16:11.782221  296456 cri.go:129] container: {ID:069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d Status:running}
	I1124 03:16:11.782258  296456 cri.go:131] skipping 069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d - not in ps
	I1124 03:16:11.782266  296456 cri.go:129] container: {ID:0830e39eeeafab195bf6cfbdde0c962d7d4b6ecb4414b7844f1e8b2f6e008805 Status:running}
	I1124 03:16:11.782274  296456 cri.go:131] skipping 0830e39eeeafab195bf6cfbdde0c962d7d4b6ecb4414b7844f1e8b2f6e008805 - not in ps
	I1124 03:16:11.782281  296456 cri.go:129] container: {ID:1905ba415bc32ade6726b9e73ded61d94eea5952320b4ec7490cccea3bdd8e5c Status:running}
	I1124 03:16:11.782291  296456 cri.go:135] skipping {1905ba415bc32ade6726b9e73ded61d94eea5952320b4ec7490cccea3bdd8e5c running}: state = "running", want "paused"
	I1124 03:16:11.782300  296456 cri.go:129] container: {ID:34db4524c1971bf9cb9799bfca61fa40491c77969833d8604733a99d27d41043 Status:running}
	I1124 03:16:11.782309  296456 cri.go:135] skipping {34db4524c1971bf9cb9799bfca61fa40491c77969833d8604733a99d27d41043 running}: state = "running", want "paused"
	I1124 03:16:11.782316  296456 cri.go:129] container: {ID:a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e Status:running}
	I1124 03:16:11.782325  296456 cri.go:131] skipping a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e - not in ps
	I1124 03:16:11.782330  296456 cri.go:129] container: {ID:ad3e39fa3b1eb2303ca9a61e021329077bb6a42757d9867ea44286c63a41b396 Status:running}
	I1124 03:16:11.782336  296456 cri.go:135] skipping {ad3e39fa3b1eb2303ca9a61e021329077bb6a42757d9867ea44286c63a41b396 running}: state = "running", want "paused"
	I1124 03:16:11.782342  296456 cri.go:129] container: {ID:ed85b8aa4091613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100 Status:running}
	I1124 03:16:11.782350  296456 cri.go:131] skipping ed85b8aa4091613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100 - not in ps
	I1124 03:16:11.782357  296456 cri.go:129] container: {ID:fe38457a72ea7fc882c58b85369848bf57d18aefe81105f137666578a02d6e0b Status:running}
	I1124 03:16:11.782365  296456 cri.go:135] skipping {fe38457a72ea7fc882c58b85369848bf57d18aefe81105f137666578a02d6e0b running}: state = "running", want "paused"
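The skip decisions above come from cross-referencing two listings: crictl supplies the kube-system container IDs and runc supplies their runtime state, and only containers already in the "paused" state would be acted on, so every running container (and every sandbox not in the crictl list) is skipped. The two commands are taken verbatim from the log; the jq filter is only an illustration for trimming the JSON and is not part of the log:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

    # Condense runc's JSON to id/status pairs for readability (jq usage is an assumption):
    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | "\(.id[0:12])  \(.status)"'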
	I1124 03:16:11.782416  296456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:16:11.806262  296456 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:16:11.806508  296456 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:16:11.806723  296456 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:16:11.856896  296456 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:16:11.859320  296456 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-427637" does not appear in /home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:16:11.860409  296456 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-4883/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-427637" cluster setting kubeconfig missing "embed-certs-427637" context setting]
	I1124 03:16:11.861169  296456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/kubeconfig: {Name:mkf99f016b653afd282cf36d34d1cc32c34d90de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:11.863351  296456 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:16:11.885572  296456 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 03:16:11.885640  296456 kubeadm.go:602] duration metric: took 79.057684ms to restartPrimaryControlPlane
	I1124 03:16:11.885651  296456 kubeadm.go:403] duration metric: took 246.462683ms to StartCluster
	I1124 03:16:11.885838  296456 settings.go:142] acquiring lock: {Name:mk05d84efd831d60555ea716cd9d2a0a41871249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:11.885967  296456 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:16:11.888677  296456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/kubeconfig: {Name:mkf99f016b653afd282cf36d34d1cc32c34d90de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
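The repair above only adds the missing "embed-certs-427637" cluster and context entries to the shared kubeconfig, guarded by the file lock. One way to confirm the entries landed, assuming kubectl is available on the host (this check itself is not part of the log):

    export KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
    kubectl config get-contexts embed-certs-427637
    kubectl config view -o jsonpath='{.clusters[?(@.name=="embed-certs-427637")].name}'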
	I1124 03:16:11.889389  296456 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:16:11.889456  296456 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:16:11.889890  296456 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-427637"
	I1124 03:16:11.889908  296456 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-427637"
	W1124 03:16:11.889916  296456 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:16:11.889951  296456 host.go:66] Checking if "embed-certs-427637" exists ...
	I1124 03:16:11.890435  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:11.889646  296456 config.go:182] Loaded profile config "embed-certs-427637": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:16:11.890632  296456 addons.go:70] Setting default-storageclass=true in profile "embed-certs-427637"
	I1124 03:16:11.890698  296456 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-427637"
	I1124 03:16:11.890843  296456 addons.go:70] Setting dashboard=true in profile "embed-certs-427637"
	I1124 03:16:11.890985  296456 addons.go:239] Setting addon dashboard=true in "embed-certs-427637"
	W1124 03:16:11.890995  296456 addons.go:248] addon dashboard should already be in state true
	I1124 03:16:11.891019  296456 host.go:66] Checking if "embed-certs-427637" exists ...
	I1124 03:16:11.891328  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:11.890923  296456 addons.go:70] Setting metrics-server=true in profile "embed-certs-427637"
	I1124 03:16:11.891604  296456 addons.go:239] Setting addon metrics-server=true in "embed-certs-427637"
	W1124 03:16:11.891621  296456 addons.go:248] addon metrics-server should already be in state true
	I1124 03:16:11.891682  296456 host.go:66] Checking if "embed-certs-427637" exists ...
	I1124 03:16:11.892375  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:11.893110  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:11.893315  296456 out.go:179] * Verifying Kubernetes components...
	I1124 03:16:11.895241  296456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:16:11.940883  296456 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:16:11.941050  296456 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:16:11.942382  296456 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:16:11.942400  296456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:16:11.942458  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:11.943685  296456 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:16:11.945014  296456 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:16:11.945034  296456 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:16:11.945097  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:11.951174  296456 addons.go:239] Setting addon default-storageclass=true in "embed-certs-427637"
	W1124 03:16:11.951196  296456 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:16:11.951228  296456 host.go:66] Checking if "embed-certs-427637" exists ...
	I1124 03:16:11.951725  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:11.965712  296456 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
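Each addon installer above copies its manifest into the node over SSH, resolving the host-side port mapped to the container's 22/tcp with the inspect template seen in the log. A condensed sketch of that lookup (the template is copied from the calls above; the echoed command is only illustrative):

    SSH_PORT=$(docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      embed-certs-427637)
    echo "manifests are copied over: ssh -p ${SSH_PORT} docker@127.0.0.1"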
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	c09c633c93ab7       56cc512116c8f       7 seconds ago       Running             busybox                   0                   79d48e512afea       busybox                                                default
	47833e056afc1       52546a367cc9e       23 seconds ago      Running             coredns                   0                   22f3e49755732       coredns-66bc5c9577-d78bs                               kube-system
	d5e5ef5586d54       6e38f40d628db       23 seconds ago      Running             storage-provisioner       0                   5762747da2f73       storage-provisioner                                    kube-system
	7beba598dd65a       409467f978b4a       35 seconds ago      Running             kindnet-cni               0                   b2a6fb7a51694       kindnet-b22kj                                          kube-system
	e3f888fa514e5       fc25172553d79       35 seconds ago      Running             kube-proxy                0                   681e0c229dc03       kube-proxy-pdsd5                                       kube-system
	6ab03610fd9a3       c80c8dbafe7dd       47 seconds ago      Running             kube-controller-manager   0                   14d2b509320c1       kube-controller-manager-default-k8s-diff-port-983163   kube-system
	f0dee428c966f       c3994bc696102       47 seconds ago      Running             kube-apiserver            0                   bb403dc0803cb       kube-apiserver-default-k8s-diff-port-983163            kube-system
	9822639bf4a96       7dd6aaa1717ab       47 seconds ago      Running             kube-scheduler            0                   18af15a8467fc       kube-scheduler-default-k8s-diff-port-983163            kube-system
	3499337e0ee82       5f1f5298c888d       47 seconds ago      Running             etcd                      0                   b96a17ac2b1f7       etcd-default-k8s-diff-port-983163                      kube-system
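The table above belongs to minikube's post-mortem dump for the default-k8s-diff-port-983163 profile. A roughly equivalent view can be taken from inside that node; assuming the standard minikube ssh wrapper (the exact command that generated the table is not shown in the log):

    minikube ssh -p default-k8s-diff-port-983163 -- sudo crictl ps -a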
	
	
	==> containerd <==
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.261655059Z" level=info msg="connecting to shim d5e5ef5586d54d7bed7498dc46b356231a0485d014a3808bae84eb7f934910e0" address="unix:///run/containerd/s/6229b7ee0ec68785c5637e85a2337046f991ef509049e734e68d89e51855bde6" protocol=ttrpc version=3
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.284884843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d78bs,Uid:8b371860-34fe-4cb2-99f2-5a6457b82c9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"22f3e497557321424065cdf388e3fd04ebbdd4413e8d56ef62065eae6efcb9ba\""
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.295697506Z" level=info msg="CreateContainer within sandbox \"22f3e497557321424065cdf388e3fd04ebbdd4413e8d56ef62065eae6efcb9ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.304368157Z" level=info msg="Container 47833e056afc1701cbddfa37311fc0ab1e2f08e117ec8cd728b74fb12a7c6447: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.311755373Z" level=info msg="CreateContainer within sandbox \"22f3e497557321424065cdf388e3fd04ebbdd4413e8d56ef62065eae6efcb9ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"47833e056afc1701cbddfa37311fc0ab1e2f08e117ec8cd728b74fb12a7c6447\""
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.312608335Z" level=info msg="StartContainer for \"47833e056afc1701cbddfa37311fc0ab1e2f08e117ec8cd728b74fb12a7c6447\""
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.313911333Z" level=info msg="connecting to shim 47833e056afc1701cbddfa37311fc0ab1e2f08e117ec8cd728b74fb12a7c6447" address="unix:///run/containerd/s/ff0dd49adc05ee52dcb6dbc605d86432ab1d75d15c71f69e729cb1debd08edcb" protocol=ttrpc version=3
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.341521656Z" level=info msg="StartContainer for \"d5e5ef5586d54d7bed7498dc46b356231a0485d014a3808bae84eb7f934910e0\" returns successfully"
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.398213048Z" level=info msg="StartContainer for \"47833e056afc1701cbddfa37311fc0ab1e2f08e117ec8cd728b74fb12a7c6447\" returns successfully"
	Nov 24 03:16:03 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:03.045960320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:c58a2189-5a2a-43df-9dab-025a0f79f2aa,Namespace:default,Attempt:0,}"
	Nov 24 03:16:03 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:03.691767810Z" level=info msg="connecting to shim 79d48e512afea324e55747fb32300c0e5933738863ed8cbd424a353c692a1226" address="unix:///run/containerd/s/d92f351532d74ea0e1dbf5ae2507d5c4b7184d5abf49f0bd2827ce2aa85c095f" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:16:03 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:03.891445971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:c58a2189-5a2a-43df-9dab-025a0f79f2aa,Namespace:default,Attempt:0,} returns sandbox id \"79d48e512afea324e55747fb32300c0e5933738863ed8cbd424a353c692a1226\""
	Nov 24 03:16:03 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:03.893909843Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.001348684Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.002134396Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.003216399Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.004852596Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.005423580Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.111358374s"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.005468233Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.009289211Z" level=info msg="CreateContainer within sandbox \"79d48e512afea324e55747fb32300c0e5933738863ed8cbd424a353c692a1226\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.015496006Z" level=info msg="Container c09c633c93ab7ac72a3cbb8e044127a93555d9f1df029bbe39c22e0111b8a777: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.020504253Z" level=info msg="CreateContainer within sandbox \"79d48e512afea324e55747fb32300c0e5933738863ed8cbd424a353c692a1226\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"c09c633c93ab7ac72a3cbb8e044127a93555d9f1df029bbe39c22e0111b8a777\""
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.021140681Z" level=info msg="StartContainer for \"c09c633c93ab7ac72a3cbb8e044127a93555d9f1df029bbe39c22e0111b8a777\""
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.022082014Z" level=info msg="connecting to shim c09c633c93ab7ac72a3cbb8e044127a93555d9f1df029bbe39c22e0111b8a777" address="unix:///run/containerd/s/d92f351532d74ea0e1dbf5ae2507d5c4b7184d5abf49f0bd2827ce2aa85c095f" protocol=ttrpc version=3
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.073910386Z" level=info msg="StartContainer for \"c09c633c93ab7ac72a3cbb8e044127a93555d9f1df029bbe39c22e0111b8a777\" returns successfully"
	
	
	==> coredns [47833e056afc1701cbddfa37311fc0ab1e2f08e117ec8cd728b74fb12a7c6447] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54279 - 56713 "HINFO IN 4735573002917364633.5896329205484484595. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067804178s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-983163
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-983163
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=default-k8s-diff-port-983163
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_15_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:15:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-983163
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:16:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:15:48 +0000   Mon, 24 Nov 2025 03:15:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:15:48 +0000   Mon, 24 Nov 2025 03:15:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:15:48 +0000   Mon, 24 Nov 2025 03:15:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:15:48 +0000   Mon, 24 Nov 2025 03:15:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-983163
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                ddca803d-d9cd-4899-9051-14cb08d85cbf
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-d78bs                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     37s
	  kube-system                 etcd-default-k8s-diff-port-983163                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-b22kj                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      37s
	  kube-system                 kube-apiserver-default-k8s-diff-port-983163             250m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-983163    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-pdsd5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-default-k8s-diff-port-983163             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 35s                kube-proxy       
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node default-k8s-diff-port-983163 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node default-k8s-diff-port-983163 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x7 over 48s)  kubelet          Node default-k8s-diff-port-983163 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  48s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node default-k8s-diff-port-983163 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node default-k8s-diff-port-983163 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node default-k8s-diff-port-983163 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node default-k8s-diff-port-983163 event: Registered Node default-k8s-diff-port-983163 in Controller
	  Normal  NodeReady                25s                kubelet          Node default-k8s-diff-port-983163 status is now: NodeReady
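The node dump above matches the shape of kubectl's node description for this profile; assuming the kubeconfig context minikube creates for it, a comparable dump can be regenerated with:

    kubectl --context default-k8s-diff-port-983163 describe node default-k8s-diff-port-983163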
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [3499337e0ee82a2f81bd5caa1e79e01cff2507b0698469d50af9736a90b933ca] <==
	{"level":"warn","ts":"2025-11-24T03:15:27.612144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.621839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.626677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.634248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.641151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.647407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.654194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.667970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.675097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.681915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.701258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.710614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.718192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.725606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.732864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.740049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.747860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.753691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.774552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.782610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.790773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.851504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:29.939992Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.84401ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790224371133139 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:controller:expand-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:controller:expand-controller\" value_size:655 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T03:15:29.940126Z","caller":"traceutil/trace.go:172","msg":"trace[958052545] transaction","detail":"{read_only:false; response_revision:195; number_of_response:1; }","duration":"258.009958ms","start":"2025-11-24T03:15:29.682099Z","end":"2025-11-24T03:15:29.940109Z","steps":["trace[958052545] 'process raft request'  (duration: 120.580305ms)","trace[958052545] 'compare'  (duration: 136.727923ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:16:02.843698Z","caller":"traceutil/trace.go:172","msg":"trace[1361567151] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"161.317271ms","start":"2025-11-24T03:16:02.682359Z","end":"2025-11-24T03:16:02.843676Z","steps":["trace[1361567151] 'process raft request'  (duration: 94.723851ms)","trace[1361567151] 'compare'  (duration: 66.392043ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:16:13 up 58 min,  0 user,  load average: 5.03, 3.35, 2.21
	Linux default-k8s-diff-port-983163 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7beba598dd65a23f5bc047d323f14dd12e71445a729cd6f29e2c587dae089612] <==
	I1124 03:15:38.146695       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:15:38.147026       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 03:15:38.147173       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:15:38.147194       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:15:38.147229       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:15:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:15:38.347877       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:15:38.347936       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:15:38.347951       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:15:38.445680       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:15:38.809263       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:15:38.809320       1 metrics.go:72] Registering metrics
	I1124 03:15:38.809419       1 controller.go:711] "Syncing nftables rules"
	I1124 03:15:48.349906       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:15:48.349963       1 main.go:301] handling current node
	I1124 03:15:58.354849       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:15:58.354918       1 main.go:301] handling current node
	I1124 03:16:08.348894       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:16:08.348935       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f0dee428c966f47fa114a6190b11d31311ef28bf95bd4181a7a3c7cb9ba1b761] <==
	I1124 03:15:28.409312       1 policy_source.go:240] refreshing policies
	E1124 03:15:28.428364       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1124 03:15:28.474944       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:15:28.482467       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:15:28.482667       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:28.492466       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:28.492522       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:15:28.578188       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:15:29.277952       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:15:29.281838       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:15:29.281854       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:15:30.143216       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:15:30.181819       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:15:30.283064       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:15:30.289135       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1124 03:15:30.290164       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:15:30.295414       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:15:30.754513       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:15:31.252424       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:15:31.262615       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:15:31.271605       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:15:35.954553       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:35.958479       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:36.454042       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:15:36.654642       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6ab03610fd9a3c11ed53b9af84684605d2fbe3dac58d8504961f02d59de2827c] <==
	I1124 03:15:35.750760       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 03:15:35.750902       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:15:35.750917       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:15:35.751317       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:15:35.751313       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:15:35.751337       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:15:35.751718       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 03:15:35.751754       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:15:35.751831       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 03:15:35.751831       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:15:35.752542       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:15:35.752610       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:15:35.753701       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 03:15:35.756029       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:15:35.758169       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:15:35.758200       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:15:35.761807       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 03:15:35.761878       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 03:15:35.761922       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 03:15:35.761930       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 03:15:35.761937       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 03:15:35.763504       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:15:35.769414       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-983163" podCIDRs=["10.244.0.0/24"]
	I1124 03:15:35.769525       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:15:50.703444       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e3f888fa514e5254d2bc249c2afa1a07e6a99bf5560622158ceec2cf8f131ca5] <==
	I1124 03:15:37.640628       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:15:37.707656       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:15:37.808168       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:15:37.808215       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 03:15:37.808358       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:15:37.830857       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:15:37.830906       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:15:37.836297       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:15:37.836700       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:15:37.836723       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:15:37.838447       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:15:37.838488       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:15:37.838472       1 config.go:200] "Starting service config controller"
	I1124 03:15:37.838569       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:15:37.838584       1 config.go:309] "Starting node config controller"
	I1124 03:15:37.838594       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:15:37.838602       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:15:37.838543       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:15:37.838627       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:15:37.938688       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:15:37.938722       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:15:37.938735       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9822639bf4a960eaa781e0de24d0537229b69860c3f8fd7791731f3453b44446] <==
	E1124 03:15:28.371035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:15:28.371122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:15:28.371134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:15:28.371144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:15:28.371314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:15:28.371452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:15:28.371541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:15:28.371553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:15:28.371802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:15:28.371865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:15:28.371620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:15:28.371912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:15:28.371620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:15:28.372010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:15:29.259965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:15:29.270503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:15:29.308664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:15:29.315844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:15:29.320937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:15:29.395691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:15:29.493423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:15:29.519570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:15:29.548835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:15:29.736014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 03:15:31.565758       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.551753    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a704097c-5e9c-472c-a33c-74f3b5555277-kube-proxy\") pod \"kube-proxy-pdsd5\" (UID: \"a704097c-5e9c-472c-a33c-74f3b5555277\") " pod="kube-system/kube-proxy-pdsd5"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.551839    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c78dfc1-53c3-4d7d-bac5-a57266e63935-xtables-lock\") pod \"kindnet-b22kj\" (UID: \"9c78dfc1-53c3-4d7d-bac5-a57266e63935\") " pod="kube-system/kindnet-b22kj"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.551905    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng5tw\" (UniqueName: \"kubernetes.io/projected/a704097c-5e9c-472c-a33c-74f3b5555277-kube-api-access-ng5tw\") pod \"kube-proxy-pdsd5\" (UID: \"a704097c-5e9c-472c-a33c-74f3b5555277\") " pod="kube-system/kube-proxy-pdsd5"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.551938    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a704097c-5e9c-472c-a33c-74f3b5555277-xtables-lock\") pod \"kube-proxy-pdsd5\" (UID: \"a704097c-5e9c-472c-a33c-74f3b5555277\") " pod="kube-system/kube-proxy-pdsd5"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.551968    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a704097c-5e9c-472c-a33c-74f3b5555277-lib-modules\") pod \"kube-proxy-pdsd5\" (UID: \"a704097c-5e9c-472c-a33c-74f3b5555277\") " pod="kube-system/kube-proxy-pdsd5"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.551998    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c78dfc1-53c3-4d7d-bac5-a57266e63935-lib-modules\") pod \"kindnet-b22kj\" (UID: \"9c78dfc1-53c3-4d7d-bac5-a57266e63935\") " pod="kube-system/kindnet-b22kj"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.552019    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbc6x\" (UniqueName: \"kubernetes.io/projected/9c78dfc1-53c3-4d7d-bac5-a57266e63935-kube-api-access-pbc6x\") pod \"kindnet-b22kj\" (UID: \"9c78dfc1-53c3-4d7d-bac5-a57266e63935\") " pod="kube-system/kindnet-b22kj"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.552065    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9c78dfc1-53c3-4d7d-bac5-a57266e63935-cni-cfg\") pod \"kindnet-b22kj\" (UID: \"9c78dfc1-53c3-4d7d-bac5-a57266e63935\") " pod="kube-system/kindnet-b22kj"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:15:36.660361    1458 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:15:36.660417    1458 projected.go:196] Error preparing data for projected volume kube-api-access-pbc6x for pod kube-system/kindnet-b22kj: configmap "kube-root-ca.crt" not found
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:15:36.660379    1458 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:15:36.660524    1458 projected.go:196] Error preparing data for projected volume kube-api-access-ng5tw for pod kube-system/kube-proxy-pdsd5: configmap "kube-root-ca.crt" not found
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:15:36.660532    1458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c78dfc1-53c3-4d7d-bac5-a57266e63935-kube-api-access-pbc6x podName:9c78dfc1-53c3-4d7d-bac5-a57266e63935 nodeName:}" failed. No retries permitted until 2025-11-24 03:15:37.160481339 +0000 UTC m=+6.150081948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pbc6x" (UniqueName: "kubernetes.io/projected/9c78dfc1-53c3-4d7d-bac5-a57266e63935-kube-api-access-pbc6x") pod "kindnet-b22kj" (UID: "9c78dfc1-53c3-4d7d-bac5-a57266e63935") : configmap "kube-root-ca.crt" not found
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:15:36.660592    1458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a704097c-5e9c-472c-a33c-74f3b5555277-kube-api-access-ng5tw podName:a704097c-5e9c-472c-a33c-74f3b5555277 nodeName:}" failed. No retries permitted until 2025-11-24 03:15:37.160568348 +0000 UTC m=+6.150168955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ng5tw" (UniqueName: "kubernetes.io/projected/a704097c-5e9c-472c-a33c-74f3b5555277-kube-api-access-ng5tw") pod "kube-proxy-pdsd5" (UID: "a704097c-5e9c-472c-a33c-74f3b5555277") : configmap "kube-root-ca.crt" not found
	Nov 24 03:15:38 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:38.169750    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-b22kj" podStartSLOduration=2.169727227 podStartE2EDuration="2.169727227s" podCreationTimestamp="2025-11-24 03:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:38.169328174 +0000 UTC m=+7.158928810" watchObservedRunningTime="2025-11-24 03:15:38.169727227 +0000 UTC m=+7.159327836"
	Nov 24 03:15:40 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:40.669936    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pdsd5" podStartSLOduration=4.669911293 podStartE2EDuration="4.669911293s" podCreationTimestamp="2025-11-24 03:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:38.199213964 +0000 UTC m=+7.188814571" watchObservedRunningTime="2025-11-24 03:15:40.669911293 +0000 UTC m=+9.659511902"
	Nov 24 03:15:48 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:48.386021    1458 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:15:48 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:48.536194    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ktch\" (UniqueName: \"kubernetes.io/projected/2da9e6e3-1153-465b-b308-22562c37e66d-kube-api-access-6ktch\") pod \"storage-provisioner\" (UID: \"2da9e6e3-1153-465b-b308-22562c37e66d\") " pod="kube-system/storage-provisioner"
	Nov 24 03:15:48 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:48.536444    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b371860-34fe-4cb2-99f2-5a6457b82c9e-config-volume\") pod \"coredns-66bc5c9577-d78bs\" (UID: \"8b371860-34fe-4cb2-99f2-5a6457b82c9e\") " pod="kube-system/coredns-66bc5c9577-d78bs"
	Nov 24 03:15:48 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:48.536618    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z566k\" (UniqueName: \"kubernetes.io/projected/8b371860-34fe-4cb2-99f2-5a6457b82c9e-kube-api-access-z566k\") pod \"coredns-66bc5c9577-d78bs\" (UID: \"8b371860-34fe-4cb2-99f2-5a6457b82c9e\") " pod="kube-system/coredns-66bc5c9577-d78bs"
	Nov 24 03:15:48 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:48.536681    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2da9e6e3-1153-465b-b308-22562c37e66d-tmp\") pod \"storage-provisioner\" (UID: \"2da9e6e3-1153-465b-b308-22562c37e66d\") " pod="kube-system/storage-provisioner"
	Nov 24 03:15:50 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:50.213966    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-d78bs" podStartSLOduration=14.213937341 podStartE2EDuration="14.213937341s" podCreationTimestamp="2025-11-24 03:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:50.211272549 +0000 UTC m=+19.200873159" watchObservedRunningTime="2025-11-24 03:15:50.213937341 +0000 UTC m=+19.203537950"
	Nov 24 03:16:00 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:16:00.208987    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=23.208963108 podStartE2EDuration="23.208963108s" podCreationTimestamp="2025-11-24 03:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:50.238374282 +0000 UTC m=+19.227974891" watchObservedRunningTime="2025-11-24 03:16:00.208963108 +0000 UTC m=+29.198563716"
	Nov 24 03:16:02 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:16:02.725725    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7htjx\" (UniqueName: \"kubernetes.io/projected/c58a2189-5a2a-43df-9dab-025a0f79f2aa-kube-api-access-7htjx\") pod \"busybox\" (UID: \"c58a2189-5a2a-43df-9dab-025a0f79f2aa\") " pod="default/busybox"
	Nov 24 03:16:11 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:16:11.742581    1458 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.103.2:60994->192.168.103.2:10010: write tcp 192.168.103.2:60994->192.168.103.2:10010: write: broken pipe
	
	
	==> storage-provisioner [d5e5ef5586d54d7bed7498dc46b356231a0485d014a3808bae84eb7f934910e0] <==
	I1124 03:15:49.474319       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-983163_edaab38d-c23a-4275-be44-98360b0bd353!
	W1124 03:15:51.390833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:51.395878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:53.399560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:53.405622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:55.408692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:55.447423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:57.450526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:57.457406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:59.461045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:59.465087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:01.468883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:01.473645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:03.476349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:03.543018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:05.546523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:05.551011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:07.554691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:07.558606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:09.562025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:09.568108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:11.575259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:11.585787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:13.591758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:13.601041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-983163 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-983163
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-983163:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76",
	        "Created": "2025-11-24T03:15:12.954902195Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282402,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:15:12.991354931Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76/hostname",
	        "HostsPath": "/var/lib/docker/containers/cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76/hosts",
	        "LogPath": "/var/lib/docker/containers/cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76/cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76-json.log",
	        "Name": "/default-k8s-diff-port-983163",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-983163:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-983163",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cb4836c567848b7f26142b20b4abc7b0c8433fc90ca43b2cc5f749a28ff69f76",
	                "LowerDir": "/var/lib/docker/overlay2/b0f3893bdb488d7f02ccca9073ec640a3fe251b57c95ab76e2bb0f11b8bccc3b-init/diff:/var/lib/docker/overlay2/2f5d717ed401f39785659385ff032a177c754c3cfdb9c7e8f0a269ab1990aca3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0f3893bdb488d7f02ccca9073ec640a3fe251b57c95ab76e2bb0f11b8bccc3b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0f3893bdb488d7f02ccca9073ec640a3fe251b57c95ab76e2bb0f11b8bccc3b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0f3893bdb488d7f02ccca9073ec640a3fe251b57c95ab76e2bb0f11b8bccc3b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-983163",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-983163/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-983163",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-983163",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-983163",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3d8018c37e29dca0c575cf870178777a9df4e18df413adba098be056811e58d4",
	            "SandboxKey": "/var/run/docker/netns/3d8018c37e29",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-983163": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "024d693698fdde35a22906342d133189292be80e6dce59d8d98f74f5f877be6c",
	                    "EndpointID": "47c0fb15b70a9b7027b544372edbbc54c2b1194b4d68496e7a7b59be8951e9c8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "4a:e4:da:ac:70:3a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-983163",
	                        "cb4836c56784"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-983163 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-983163 logs -n 25: (1.169987149s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ pause   │ -p old-k8s-version-838815 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ unpause │ -p old-k8s-version-838815 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ delete  │ -p old-k8s-version-838815                                                                                                                                                                                                                           │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ addons  │ enable dashboard -p no-preload-182765 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ start   │ -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p old-k8s-version-838815                                                                                                                                                                                                                           │ old-k8s-version-838815       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ start   │ -p embed-certs-427637 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-427637           │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:15 UTC │
	│ start   │ -p cert-expiration-004045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-004045       │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p cert-expiration-004045                                                                                                                                                                                                                           │ cert-expiration-004045       │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p disable-driver-mounts-602172                                                                                                                                                                                                                     │ disable-driver-mounts-602172 │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ start   │ -p default-k8s-diff-port-983163 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-983163 │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:16 UTC │
	│ start   │ -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-093930    │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-093930    │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ image   │ no-preload-182765 image list --format=json                                                                                                                                                                                                          │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ pause   │ -p no-preload-182765 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ unpause │ -p no-preload-182765 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ addons  │ enable metrics-server -p embed-certs-427637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-427637           │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ stop    │ -p embed-certs-427637 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-427637           │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:16 UTC │
	│ delete  │ -p no-preload-182765                                                                                                                                                                                                                                │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ delete  │ -p no-preload-182765                                                                                                                                                                                                                                │ no-preload-182765            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ start   │ -p newest-cni-531301 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-531301            │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-093930                                                                                                                                                                                                                        │ kubernetes-upgrade-093930    │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │ 24 Nov 25 03:15 UTC │
	│ start   │ -p auto-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-682898                  │ jenkins │ v1.37.0 │ 24 Nov 25 03:15 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-427637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-427637           │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │ 24 Nov 25 03:16 UTC │
	│ start   │ -p embed-certs-427637 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-427637           │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:16:04
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:16:04.564189  296456 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:16:04.564469  296456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:04.564475  296456 out.go:374] Setting ErrFile to fd 2...
	I1124 03:16:04.564482  296456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:04.564809  296456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:16:04.565636  296456 out.go:368] Setting JSON to false
	I1124 03:16:04.566947  296456 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3508,"bootTime":1763950657,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:16:04.567021  296456 start.go:143] virtualization: kvm guest
	I1124 03:16:04.571261  296456 out.go:179] * [embed-certs-427637] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:16:04.572622  296456 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:16:04.574052  296456 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:16:04.572639  296456 notify.go:221] Checking for updates...
	I1124 03:16:04.576449  296456 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:16:04.577649  296456 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 03:16:04.578886  296456 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:16:04.580106  296456 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:16:04.581982  296456 config.go:182] Loaded profile config "embed-certs-427637": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:16:04.582802  296456 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:16:04.619187  296456 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:16:04.619283  296456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:16:04.703038  296456 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:16:04.688574209 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:16:04.703162  296456 docker.go:319] overlay module found
	I1124 03:16:04.704722  296456 out.go:179] * Using the docker driver based on existing profile
	I1124 03:16:04.705738  296456 start.go:309] selected driver: docker
	I1124 03:16:04.705754  296456 start.go:927] validating driver "docker" against &{Name:embed-certs-427637 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-427637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:16:04.705864  296456 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:16:04.706408  296456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:16:04.780808  296456 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:16:04.770554948 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:16:04.781208  296456 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:16:04.781246  296456 cni.go:84] Creating CNI manager for ""
	I1124 03:16:04.781316  296456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:16:04.781374  296456 start.go:353] cluster config:
	{Name:embed-certs-427637 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-427637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:16:04.783102  296456 out.go:179] * Starting "embed-certs-427637" primary control-plane node in "embed-certs-427637" cluster
	I1124 03:16:04.783845  296456 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:16:04.785049  296456 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:16:04.786313  296456 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:16:04.786349  296456 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1124 03:16:04.786361  296456 cache.go:65] Caching tarball of preloaded images
	I1124 03:16:04.786419  296456 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:16:04.786466  296456 preload.go:238] Found /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 03:16:04.786482  296456 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 03:16:04.786620  296456 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/config.json ...
	I1124 03:16:04.808410  296456 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:16:04.808431  296456 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:16:04.808451  296456 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:16:04.808483  296456 start.go:360] acquireMachinesLock for embed-certs-427637: {Name:mkf67edec8afad055eff25b5939c61a6a43d59be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:16:04.808544  296456 start.go:364] duration metric: took 41.182µs to acquireMachinesLock for "embed-certs-427637"
	I1124 03:16:04.808565  296456 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:16:04.808575  296456 fix.go:54] fixHost starting: 
	I1124 03:16:04.808864  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:04.827837  296456 fix.go:112] recreateIfNeeded on embed-certs-427637: state=Stopped err=<nil>
	W1124 03:16:04.827877  296456 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:16:03.943368  292708 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:16:04.054435  292708 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:16:04.199408  292708 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:16:04.199562  292708 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:16:04.734518  292708 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:16:04.968714  292708 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:16:05.201121  292708 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:16:05.483354  292708 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:16:06.119871  292708 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:16:06.120383  292708 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:16:06.124057  292708 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:16:06.125414  292708 out.go:252]   - Booting up control plane ...
	I1124 03:16:06.125494  292708 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:16:06.125576  292708 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:16:06.126340  292708 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:16:06.140929  292708 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:16:06.141056  292708 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:16:06.147647  292708 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:16:06.147891  292708 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:16:06.147939  292708 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:16:06.251488  292708 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:16:06.251649  292708 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:16:07.253099  292708 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001721573s
	I1124 03:16:07.256282  292708 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:16:07.256401  292708 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 03:16:07.256562  292708 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:16:07.256679  292708 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:16:03.852308  294601 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-682898:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.313343779s)
	I1124 03:16:03.852358  294601 kic.go:203] duration metric: took 4.313519476s to extract preloaded images to volume ...
	W1124 03:16:03.852456  294601 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:16:03.852504  294601 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:16:03.852561  294601 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:16:03.925549  294601 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-682898 --name auto-682898 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-682898 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-682898 --network auto-682898 --ip 192.168.76.2 --volume auto-682898:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:16:04.271988  294601 cli_runner.go:164] Run: docker container inspect auto-682898 --format={{.State.Running}}
	I1124 03:16:04.295176  294601 cli_runner.go:164] Run: docker container inspect auto-682898 --format={{.State.Status}}
	I1124 03:16:04.320147  294601 cli_runner.go:164] Run: docker exec auto-682898 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:16:04.378690  294601 oci.go:144] the created container "auto-682898" has a running status.
	I1124 03:16:04.378721  294601 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa...
	I1124 03:16:04.462039  294601 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:16:04.489664  294601 cli_runner.go:164] Run: docker container inspect auto-682898 --format={{.State.Status}}
	I1124 03:16:04.511668  294601 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:16:04.511690  294601 kic_runner.go:114] Args: [docker exec --privileged auto-682898 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:16:04.565483  294601 cli_runner.go:164] Run: docker container inspect auto-682898 --format={{.State.Status}}
	I1124 03:16:04.587711  294601 machine.go:94] provisionDockerMachine start ...
	I1124 03:16:04.587812  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:04.616207  294601 main.go:143] libmachine: Using SSH client type: native
	I1124 03:16:04.616749  294601 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1124 03:16:04.616767  294601 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:16:04.618577  294601 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49828->127.0.0.1:33097: read: connection reset by peer
	I1124 03:16:07.764707  294601 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-682898
	
	I1124 03:16:07.764733  294601 ubuntu.go:182] provisioning hostname "auto-682898"
	I1124 03:16:07.764827  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:07.791927  294601 main.go:143] libmachine: Using SSH client type: native
	I1124 03:16:07.792211  294601 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1124 03:16:07.792237  294601 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-682898 && echo "auto-682898" | sudo tee /etc/hostname
	I1124 03:16:07.959290  294601 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-682898
	
	I1124 03:16:07.959388  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:07.980980  294601 main.go:143] libmachine: Using SSH client type: native
	I1124 03:16:07.981249  294601 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1124 03:16:07.981277  294601 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-682898' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-682898/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-682898' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:16:08.121337  294601 main.go:143] libmachine: SSH cmd err, output: <nil>: 
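The shell fragment a few lines above is how minikube makes the freshly set hostname resolve inside the guest: if no /etc/hosts entry ends with the hostname, it either rewrites an existing 127.0.1.1 line or appends one. A couple of hypothetical spot-checks (not part of the test output) that would confirm the edit on the node:

	# hypothetical verification commands, run on the node over the same SSH session
	grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # should now show: 127.0.1.1 auto-682898
	getent hosts auto-682898                        # with the default nsswitch ordering, resolves via /etc/hosts to 127.0.1.1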
	I1124 03:16:08.121378  294601 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-4883/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-4883/.minikube}
	I1124 03:16:08.121400  294601 ubuntu.go:190] setting up certificates
	I1124 03:16:08.121410  294601 provision.go:84] configureAuth start
	I1124 03:16:08.121468  294601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-682898
	I1124 03:16:08.144747  294601 provision.go:143] copyHostCerts
	I1124 03:16:08.144862  294601 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem, removing ...
	I1124 03:16:08.144878  294601 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem
	I1124 03:16:08.144958  294601 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem (1078 bytes)
	I1124 03:16:08.145078  294601 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem, removing ...
	I1124 03:16:08.145090  294601 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem
	I1124 03:16:08.145127  294601 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem (1123 bytes)
	I1124 03:16:08.145204  294601 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem, removing ...
	I1124 03:16:08.145217  294601 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem
	I1124 03:16:08.145249  294601 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem (1679 bytes)
	I1124 03:16:08.145322  294601 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem org=jenkins.auto-682898 san=[127.0.0.1 192.168.76.2 auto-682898 localhost minikube]
	I1124 03:16:08.231294  294601 provision.go:177] copyRemoteCerts
	I1124 03:16:08.231352  294601 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:16:08.231390  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:08.257103  294601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa Username:docker}
	I1124 03:16:08.364421  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:16:08.390344  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:16:08.409002  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 03:16:08.427828  294601 provision.go:87] duration metric: took 306.404518ms to configureAuth
	I1124 03:16:08.427861  294601 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:16:08.428087  294601 config.go:182] Loaded profile config "auto-682898": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:16:08.428100  294601 machine.go:97] duration metric: took 3.840367759s to provisionDockerMachine
	I1124 03:16:08.428108  294601 client.go:176] duration metric: took 9.446646086s to LocalClient.Create
	I1124 03:16:08.428133  294601 start.go:167] duration metric: took 9.446717151s to libmachine.API.Create "auto-682898"
	I1124 03:16:08.428150  294601 start.go:293] postStartSetup for "auto-682898" (driver="docker")
	I1124 03:16:08.428161  294601 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:16:08.428211  294601 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:16:08.428266  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:08.448894  294601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa Username:docker}
	I1124 03:16:08.552691  294601 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:16:08.556915  294601 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:16:08.556942  294601 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:16:08.556955  294601 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/addons for local assets ...
	I1124 03:16:08.557005  294601 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/files for local assets ...
	I1124 03:16:08.557103  294601 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem -> 84292.pem in /etc/ssl/certs
	I1124 03:16:08.557238  294601 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:16:08.565646  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:16:08.587446  294601 start.go:296] duration metric: took 159.280404ms for postStartSetup
	I1124 03:16:08.587845  294601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-682898
	I1124 03:16:08.609093  294601 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/config.json ...
	I1124 03:16:08.609367  294601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:16:08.609434  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:08.630772  294601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa Username:docker}
	I1124 03:16:08.728242  294601 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:16:08.733134  294601 start.go:128] duration metric: took 9.753927474s to createHost
	I1124 03:16:08.733158  294601 start.go:83] releasing machines lock for "auto-682898", held for 9.754062175s
	I1124 03:16:08.733228  294601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-682898
	I1124 03:16:08.752641  294601 ssh_runner.go:195] Run: cat /version.json
	I1124 03:16:08.752683  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:08.752725  294601 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:16:08.752816  294601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-682898
	I1124 03:16:08.777315  294601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa Username:docker}
	I1124 03:16:08.777574  294601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/auto-682898/id_rsa Username:docker}
	I1124 03:16:04.829552  296456 out.go:252] * Restarting existing docker container for "embed-certs-427637" ...
	I1124 03:16:04.829626  296456 cli_runner.go:164] Run: docker start embed-certs-427637
	I1124 03:16:05.131144  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:05.151228  296456 kic.go:430] container "embed-certs-427637" state is running.
	I1124 03:16:05.151617  296456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-427637
	I1124 03:16:05.171601  296456 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/config.json ...
	I1124 03:16:05.171888  296456 machine.go:94] provisionDockerMachine start ...
	I1124 03:16:05.171960  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:05.191587  296456 main.go:143] libmachine: Using SSH client type: native
	I1124 03:16:05.191890  296456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I1124 03:16:05.191903  296456 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:16:05.192464  296456 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58260->127.0.0.1:33102: read: connection reset by peer
	I1124 03:16:08.350224  296456 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-427637
	
	I1124 03:16:08.350258  296456 ubuntu.go:182] provisioning hostname "embed-certs-427637"
	I1124 03:16:08.350320  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:08.373647  296456 main.go:143] libmachine: Using SSH client type: native
	I1124 03:16:08.373993  296456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I1124 03:16:08.374013  296456 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-427637 && echo "embed-certs-427637" | sudo tee /etc/hostname
	I1124 03:16:08.534135  296456 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-427637
	
	I1124 03:16:08.534210  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:08.555057  296456 main.go:143] libmachine: Using SSH client type: native
	I1124 03:16:08.555342  296456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I1124 03:16:08.555369  296456 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-427637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-427637/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-427637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:16:08.700432  296456 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:16:08.700469  296456 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-4883/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-4883/.minikube}
	I1124 03:16:08.700519  296456 ubuntu.go:190] setting up certificates
	I1124 03:16:08.700548  296456 provision.go:84] configureAuth start
	I1124 03:16:08.700619  296456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-427637
	I1124 03:16:08.722330  296456 provision.go:143] copyHostCerts
	I1124 03:16:08.722393  296456 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem, removing ...
	I1124 03:16:08.722412  296456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem
	I1124 03:16:08.722477  296456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/ca.pem (1078 bytes)
	I1124 03:16:08.722600  296456 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem, removing ...
	I1124 03:16:08.722611  296456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem
	I1124 03:16:08.722648  296456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/cert.pem (1123 bytes)
	I1124 03:16:08.722736  296456 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem, removing ...
	I1124 03:16:08.722746  296456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem
	I1124 03:16:08.722884  296456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-4883/.minikube/key.pem (1679 bytes)
	I1124 03:16:08.722989  296456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem org=jenkins.embed-certs-427637 san=[127.0.0.1 192.168.94.2 embed-certs-427637 localhost minikube]
	I1124 03:16:08.780496  296456 provision.go:177] copyRemoteCerts
	I1124 03:16:08.780572  296456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:16:08.780630  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:08.805374  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:08.905846  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:16:08.923283  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:16:08.940540  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:16:08.957622  296456 provision.go:87] duration metric: took 257.055621ms to configureAuth
	I1124 03:16:08.957653  296456 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:16:08.957856  296456 config.go:182] Loaded profile config "embed-certs-427637": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:16:08.957870  296456 machine.go:97] duration metric: took 3.785966028s to provisionDockerMachine
	I1124 03:16:08.957878  296456 start.go:293] postStartSetup for "embed-certs-427637" (driver="docker")
	I1124 03:16:08.957887  296456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:16:08.957933  296456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:16:08.957986  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:08.978456  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:09.080008  296456 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:16:09.083667  296456 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:16:09.083689  296456 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:16:09.083701  296456 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/addons for local assets ...
	I1124 03:16:09.083758  296456 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-4883/.minikube/files for local assets ...
	I1124 03:16:09.083870  296456 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem -> 84292.pem in /etc/ssl/certs
	I1124 03:16:09.083957  296456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:16:09.093710  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:16:09.114850  296456 start.go:296] duration metric: took 156.957882ms for postStartSetup
	I1124 03:16:09.114933  296456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:16:09.114980  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:09.134385  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:09.235988  296456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:16:09.241110  296456 fix.go:56] duration metric: took 4.432530609s for fixHost
	I1124 03:16:09.241140  296456 start.go:83] releasing machines lock for "embed-certs-427637", held for 4.43258294s
	I1124 03:16:09.241210  296456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-427637
	I1124 03:16:09.261691  296456 ssh_runner.go:195] Run: cat /version.json
	I1124 03:16:09.261795  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:09.261851  296456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:16:09.261946  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:09.282894  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:09.285381  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:09.457347  296456 ssh_runner.go:195] Run: systemctl --version
	I1124 03:16:09.465035  296456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:16:09.470311  296456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:16:09.470394  296456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:16:09.480591  296456 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:16:09.480615  296456 start.go:496] detecting cgroup driver to use...
	I1124 03:16:09.480645  296456 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:16:09.480688  296456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:16:09.501117  296456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:16:09.518130  296456 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:16:09.518208  296456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:16:09.536768  296456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:16:09.551882  296456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:16:08.961492  294601 ssh_runner.go:195] Run: systemctl --version
	I1124 03:16:08.968965  294601 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:16:08.974228  294601 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:16:08.974309  294601 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:16:09.000977  294601 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:16:09.001015  294601 start.go:496] detecting cgroup driver to use...
	I1124 03:16:09.001048  294601 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:16:09.001097  294601 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:16:09.016536  294601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:16:09.029135  294601 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:16:09.029184  294601 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:16:09.046468  294601 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:16:09.065271  294601 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:16:09.155398  294601 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:16:09.246680  294601 docker.go:234] disabling docker service ...
	I1124 03:16:09.246823  294601 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:16:09.270369  294601 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:16:09.287049  294601 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:16:09.390734  294601 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:16:09.499094  294601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:16:09.515139  294601 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:16:09.533307  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 03:16:09.545750  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:16:09.557193  294601 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 03:16:09.557258  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 03:16:09.568899  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:16:09.580236  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:16:09.594178  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:16:09.606877  294601 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:16:09.616975  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:16:09.628448  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:16:09.639541  294601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:16:09.649768  294601 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:16:09.658215  294601 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:16:09.668342  294601 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:16:09.786044  294601 ssh_runner.go:195] Run: sudo systemctl restart containerd
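The sequence above (from the crictl.yaml write through the containerd restart) switches the node's CRI over to containerd: crictl is pointed at unix:///run/containerd/containerd.sock, /etc/containerd/config.toml is patched in place with sed (SystemdCgroup = true for the systemd cgroup driver, the runc v2 shim, the CNI conf_dir, the pause image, unprivileged ports), IPv4 forwarding is enabled, and the service is restarted. A few hypothetical spot-checks (not part of the test output) that would confirm the edits took effect:

	# hypothetical verification on the node, assuming the same paths as in the log above
	cat /etc/crictl.yaml                                  # expect: runtime-endpoint: unix:///run/containerd/containerd.sock
	grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = true
	sysctl net.ipv4.ip_forward                            # expect: net.ipv4.ip_forward = 1
	sudo crictl version                                   # crictl now reads /etc/crictl.yaml and talks to containerd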
	I1124 03:16:09.904966  294601 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:16:09.905038  294601 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:16:09.909557  294601 start.go:564] Will wait 60s for crictl version
	I1124 03:16:09.909628  294601 ssh_runner.go:195] Run: which crictl
	I1124 03:16:09.913357  294601 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:16:09.942245  294601 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:16:09.942308  294601 ssh_runner.go:195] Run: containerd --version
	I1124 03:16:09.973116  294601 ssh_runner.go:195] Run: containerd --version
	I1124 03:16:10.004286  294601 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 03:16:09.657244  296456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:16:09.783321  296456 docker.go:234] disabling docker service ...
	I1124 03:16:09.783389  296456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:16:09.803157  296456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:16:09.818346  296456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:16:09.927637  296456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:16:10.022909  296456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:16:10.036262  296456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:16:10.052285  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 03:16:10.063192  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:16:10.073940  296456 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 03:16:10.074015  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 03:16:10.083489  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:16:10.093648  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:16:10.107084  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:16:10.118296  296456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:16:10.127761  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:16:10.137079  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:16:10.146911  296456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:16:10.157672  296456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:16:10.165551  296456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:16:10.173302  296456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:16:10.281210  296456 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 03:16:10.453432  296456 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:16:10.453493  296456 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:16:10.459809  296456 start.go:564] Will wait 60s for crictl version
	I1124 03:16:10.459941  296456 ssh_runner.go:195] Run: which crictl
	I1124 03:16:10.463825  296456 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:16:10.500062  296456 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:16:10.500149  296456 ssh_runner.go:195] Run: containerd --version
	I1124 03:16:10.531389  296456 ssh_runner.go:195] Run: containerd --version
	I1124 03:16:10.560393  296456 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 03:16:10.562074  296456 cli_runner.go:164] Run: docker network inspect embed-certs-427637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:16:10.584359  296456 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:16:10.589123  296456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:16:10.599861  296456 kubeadm.go:884] updating cluster {Name:embed-certs-427637 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-427637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:16:10.600002  296456 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:16:10.600065  296456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:16:10.629745  296456 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:16:10.629770  296456 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:16:10.629847  296456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:16:10.657491  296456 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:16:10.657514  296456 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:16:10.657523  296456 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1124 03:16:10.657658  296456 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-427637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-427637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
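The kubelet unit fragment above is a systemd drop-in: the first, empty ExecStart= clears whatever ExecStart the base kubelet.service defines, and the second ExecStart sets the minikube-specific command (pinned binary, node IP, hostname override). The log copies it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A hypothetical way to inspect the merged unit on the node (not part of the test output):

	# hypothetical inspection commands on the node
	systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	systemctl show -p ExecStart kubelet   # the effective (overridden) command line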
	I1124 03:16:10.657724  296456 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:16:10.685304  296456 cni.go:84] Creating CNI manager for ""
	I1124 03:16:10.685325  296456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:16:10.685337  296456 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:16:10.685354  296456 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-427637 NodeName:embed-certs-427637 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:16:10.685465  296456 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-427637"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
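The YAML above is the rendered kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file); the log copies it to /var/tmp/minikube/kubeadm.yaml.new shortly afterwards. A hypothetical dry run with the same pinned kubeadm binary (not part of the test output) would exercise the file without touching the cluster:

	# hypothetical dry-run against the staged config, assuming the paths used in the log
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run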
	I1124 03:16:10.685531  296456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:16:10.693971  296456 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:16:10.694052  296456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:16:10.701850  296456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 03:16:10.714454  296456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:16:10.726969  296456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1124 03:16:10.739568  296456 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:16:10.743313  296456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:16:10.754348  296456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:16:10.835612  296456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:16:10.857249  296456 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637 for IP: 192.168.94.2
	I1124 03:16:10.857269  296456 certs.go:195] generating shared ca certs ...
	I1124 03:16:10.857286  296456 certs.go:227] acquiring lock for ca certs: {Name:mkd28e9f2e8e31fe23d0ba27851eb0df56d94420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:10.857452  296456 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key
	I1124 03:16:10.857512  296456 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key
	I1124 03:16:10.857526  296456 certs.go:257] generating profile certs ...
	I1124 03:16:10.857627  296456 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/client.key
	I1124 03:16:10.857726  296456 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/apiserver.key.de418b6c
	I1124 03:16:10.857804  296456 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/proxy-client.key
	I1124 03:16:10.857987  296456 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem (1338 bytes)
	W1124 03:16:10.858032  296456 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429_empty.pem, impossibly tiny 0 bytes
	I1124 03:16:10.858043  296456 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:16:10.858079  296456 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:16:10.858158  296456 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:16:10.858208  296456 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem (1679 bytes)
	I1124 03:16:10.858271  296456 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:16:10.859072  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:16:10.878851  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:16:10.898950  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:16:10.918497  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:16:10.944447  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:16:10.966836  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:16:10.987373  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:16:11.004373  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/embed-certs-427637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:16:11.021494  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:16:11.039836  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem --> /usr/share/ca-certificates/8429.pem (1338 bytes)
	I1124 03:16:11.059480  296456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /usr/share/ca-certificates/84292.pem (1708 bytes)
	I1124 03:16:11.080432  296456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:16:11.095555  296456 ssh_runner.go:195] Run: openssl version
	I1124 03:16:11.102316  296456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:16:11.111177  296456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:16:11.114808  296456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:16:11.114862  296456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:16:11.151286  296456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:16:11.160550  296456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8429.pem && ln -fs /usr/share/ca-certificates/8429.pem /etc/ssl/certs/8429.pem"
	I1124 03:16:11.170833  296456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8429.pem
	I1124 03:16:11.174670  296456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/8429.pem
	I1124 03:16:11.174723  296456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8429.pem
	I1124 03:16:11.210853  296456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8429.pem /etc/ssl/certs/51391683.0"
	I1124 03:16:11.219654  296456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84292.pem && ln -fs /usr/share/ca-certificates/84292.pem /etc/ssl/certs/84292.pem"
	I1124 03:16:11.228977  296456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84292.pem
	I1124 03:16:11.232728  296456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/84292.pem
	I1124 03:16:11.232792  296456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84292.pem
	I1124 03:16:11.275886  296456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84292.pem /etc/ssl/certs/3ec20f2e.0"
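	The openssl -hash / ln -fs pairs above follow the OpenSSL subject-hash convention: each CA in /usr/share/ca-certificates gets a /etc/ssl/certs/<hash>.0 symlink so OpenSSL-based clients can discover it. A small Go sketch of that step, assuming openssl is on PATH (linkCACert is an illustrative helper name, not minikube's code):

	// Sketch only: hash a CA cert and link it into the system cert directory,
	// mirroring `openssl x509 -hash -noout -in <cert>` followed by `ln -fs`.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCACert(certPath, certsDir string) error {
		// Equivalent of: openssl x509 -hash -noout -in <certPath>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))

		// Equivalent of: ln -fs <certPath> <certsDir>/<hash>.0
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // -f: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}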
	I1124 03:16:11.284387  296456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:16:11.288998  296456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:16:11.335276  296456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:16:11.384646  296456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:16:11.445235  296456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:16:11.512979  296456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:16:11.575079  296456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
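	Each openssl x509 -checkend 86400 call above succeeds only if the certificate is still valid 24 hours from now; a failure would trigger regeneration. A pure-Go equivalent of that check, as a sketch with an assumed file path:

	// Sketch, not minikube's implementation: report whether a PEM certificate
	// expires within the given window (24h matches -checkend 86400).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// true when now+window is past NotAfter, i.e. the cert expires too soon
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}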
	I1124 03:16:11.639199  296456 kubeadm.go:401] StartCluster: {Name:embed-certs-427637 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-427637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:16:11.639350  296456 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:16:11.639527  296456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:16:11.720508  296456 cri.go:89] found id: "ad3e39fa3b1eb2303ca9a61e021329077bb6a42757d9867ea44286c63a41b396"
	I1124 03:16:11.720539  296456 cri.go:89] found id: "1905ba415bc32ade6726b9e73ded61d94eea5952320b4ec7490cccea3bdd8e5c"
	I1124 03:16:11.720545  296456 cri.go:89] found id: "fe38457a72ea7fc882c58b85369848bf57d18aefe81105f137666578a02d6e0b"
	I1124 03:16:11.720550  296456 cri.go:89] found id: "34db4524c1971bf9cb9799bfca61fa40491c77969833d8604733a99d27d41043"
	I1124 03:16:11.720554  296456 cri.go:89] found id: "1c5ecefe3510d0c7d765dc59cc7bc74f67fb8c6a16a67bc2ea72265adbf79465"
	I1124 03:16:11.720559  296456 cri.go:89] found id: "e56e76bbfa118cc06d71064f22f4c4505d29a579e5d600dc5beac2698beb8dd5"
	I1124 03:16:11.720563  296456 cri.go:89] found id: "0c29b1f094f4a1f822553da904f2d9fd85f07fe1685ade3f85d7a1ad29410529"
	I1124 03:16:11.720566  296456 cri.go:89] found id: "6ee9232927baded5b8c1850deba884ba097eb1113f0945bbee245ce7682d2b44"
	I1124 03:16:11.720570  296456 cri.go:89] found id: "7456a10c919e6bc8e366bd8d2615b02ba388d90acda2ba06151b651e16735227"
	I1124 03:16:11.720580  296456 cri.go:89] found id: "4f08f2d505c46cbd0949c947f86ce23acf6de44a1fbea7f5a8f41784e3d9cee7"
	I1124 03:16:11.720584  296456 cri.go:89] found id: "b86a90195fd1a09eb58b38f26ad5eff53b8fcae105d54dd47c874e892d0342ff"
	I1124 03:16:11.720587  296456 cri.go:89] found id: "32fa11b4d353ac18238716802bf8849023987e1942cfbc93ea1025ed998f28a1"
	I1124 03:16:11.720591  296456 cri.go:89] found id: ""
	I1124 03:16:11.720642  296456 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1124 03:16:11.781972  296456 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d","pid":788,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d/rootfs","created":"2025-11-24T03:16:11.468011244Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-427637_a4ab8ffb6c99a75236ea037883afe25d","io.kubernetes.cri.sandbox-memo
ry":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a4ab8ffb6c99a75236ea037883afe25d"},"owner":"root"},{"ociVersion":"1.2.1","id":"0830e39eeeafab195bf6cfbdde0c962d7d4b6ecb4414b7844f1e8b2f6e008805","pid":834,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0830e39eeeafab195bf6cfbdde0c962d7d4b6ecb4414b7844f1e8b2f6e008805","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0830e39eeeafab195bf6cfbdde0c962d7d4b6ecb4414b7844f1e8b2f6e008805/rootfs","created":"2025-11-24T03:16:11.479837341Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"0830e39eeeafab195bf6cfbdde0c962d7d4b6ecb4414b7844f1e8b2f6e008805","io.kubernetes
.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-427637_ed6d76621c0cd78dcd5e22dd56ee6e9f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ed6d76621c0cd78dcd5e22dd56ee6e9f"},"owner":"root"},{"ociVersion":"1.2.1","id":"1905ba415bc32ade6726b9e73ded61d94eea5952320b4ec7490cccea3bdd8e5c","pid":973,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1905ba415bc32ade6726b9e73ded61d94eea5952320b4ec7490cccea3bdd8e5c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1905ba415bc32ade6726b9e73ded61d94eea5952320b4ec7490cccea3bdd8e5c/rootfs","created":"2025-11-24T03:16:11.690184452Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"ed85b8aa40
91613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100","io.kubernetes.cri.sandbox-name":"etcd-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b61c3945d487ad115d9c49c84cf7d890"},"owner":"root"},{"ociVersion":"1.2.1","id":"34db4524c1971bf9cb9799bfca61fa40491c77969833d8604733a99d27d41043","pid":932,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/34db4524c1971bf9cb9799bfca61fa40491c77969833d8604733a99d27d41043","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/34db4524c1971bf9cb9799bfca61fa40491c77969833d8604733a99d27d41043/rootfs","created":"2025-11-24T03:16:11.636873636Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d","io.kubernetes.cri.sandbox-name":"kube-apiserver-embed-cer
ts-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a4ab8ffb6c99a75236ea037883afe25d"},"owner":"root"},{"ociVersion":"1.2.1","id":"a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e","pid":862,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e/rootfs","created":"2025-11-24T03:16:11.511295854Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-schedu
ler-embed-certs-427637_f766c52874e398cbe2a2e1ace888f34d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f766c52874e398cbe2a2e1ace888f34d"},"owner":"root"},{"ociVersion":"1.2.1","id":"ad3e39fa3b1eb2303ca9a61e021329077bb6a42757d9867ea44286c63a41b396","pid":982,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad3e39fa3b1eb2303ca9a61e021329077bb6a42757d9867ea44286c63a41b396","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad3e39fa3b1eb2303ca9a61e021329077bb6a42757d9867ea44286c63a41b396/rootfs","created":"2025-11-24T03:16:11.679117423Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e","io.kube
rnetes.cri.sandbox-name":"kube-scheduler-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f766c52874e398cbe2a2e1ace888f34d"},"owner":"root"},{"ociVersion":"1.2.1","id":"ed85b8aa4091613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100","pid":869,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed85b8aa4091613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed85b8aa4091613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100/rootfs","created":"2025-11-24T03:16:11.515909436Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ed85b8aa4091613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100","io.kubernetes.cri.sandbox-log
-directory":"/var/log/pods/kube-system_etcd-embed-certs-427637_b61c3945d487ad115d9c49c84cf7d890","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b61c3945d487ad115d9c49c84cf7d890"},"owner":"root"},{"ociVersion":"1.2.1","id":"fe38457a72ea7fc882c58b85369848bf57d18aefe81105f137666578a02d6e0b","pid":934,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe38457a72ea7fc882c58b85369848bf57d18aefe81105f137666578a02d6e0b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe38457a72ea7fc882c58b85369848bf57d18aefe81105f137666578a02d6e0b/rootfs","created":"2025-11-24T03:16:11.634530826Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"0830e39eeeafab195bf6cfbdde
0c962d7d4b6ecb4414b7844f1e8b2f6e008805","io.kubernetes.cri.sandbox-name":"kube-controller-manager-embed-certs-427637","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ed6d76621c0cd78dcd5e22dd56ee6e9f"},"owner":"root"}]
	I1124 03:16:11.782206  296456 cri.go:126] list returned 8 containers
	I1124 03:16:11.782221  296456 cri.go:129] container: {ID:069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d Status:running}
	I1124 03:16:11.782258  296456 cri.go:131] skipping 069e91e7ece23c8b1a34f8a74b4d2250f73893f2ecdf34773e8ccfb36206811d - not in ps
	I1124 03:16:11.782266  296456 cri.go:129] container: {ID:0830e39eeeafab195bf6cfbdde0c962d7d4b6ecb4414b7844f1e8b2f6e008805 Status:running}
	I1124 03:16:11.782274  296456 cri.go:131] skipping 0830e39eeeafab195bf6cfbdde0c962d7d4b6ecb4414b7844f1e8b2f6e008805 - not in ps
	I1124 03:16:11.782281  296456 cri.go:129] container: {ID:1905ba415bc32ade6726b9e73ded61d94eea5952320b4ec7490cccea3bdd8e5c Status:running}
	I1124 03:16:11.782291  296456 cri.go:135] skipping {1905ba415bc32ade6726b9e73ded61d94eea5952320b4ec7490cccea3bdd8e5c running}: state = "running", want "paused"
	I1124 03:16:11.782300  296456 cri.go:129] container: {ID:34db4524c1971bf9cb9799bfca61fa40491c77969833d8604733a99d27d41043 Status:running}
	I1124 03:16:11.782309  296456 cri.go:135] skipping {34db4524c1971bf9cb9799bfca61fa40491c77969833d8604733a99d27d41043 running}: state = "running", want "paused"
	I1124 03:16:11.782316  296456 cri.go:129] container: {ID:a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e Status:running}
	I1124 03:16:11.782325  296456 cri.go:131] skipping a587a88afb3755b254c0d89ed30285e39b6f0a60f13ea5102dd1ded44b02bf2e - not in ps
	I1124 03:16:11.782330  296456 cri.go:129] container: {ID:ad3e39fa3b1eb2303ca9a61e021329077bb6a42757d9867ea44286c63a41b396 Status:running}
	I1124 03:16:11.782336  296456 cri.go:135] skipping {ad3e39fa3b1eb2303ca9a61e021329077bb6a42757d9867ea44286c63a41b396 running}: state = "running", want "paused"
	I1124 03:16:11.782342  296456 cri.go:129] container: {ID:ed85b8aa4091613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100 Status:running}
	I1124 03:16:11.782350  296456 cri.go:131] skipping ed85b8aa4091613ff2ed6855dc684689c92bf583c0818b7f93c3344de262f100 - not in ps
	I1124 03:16:11.782357  296456 cri.go:129] container: {ID:fe38457a72ea7fc882c58b85369848bf57d18aefe81105f137666578a02d6e0b Status:running}
	I1124 03:16:11.782365  296456 cri.go:135] skipping {fe38457a72ea7fc882c58b85369848bf57d18aefe81105f137666578a02d6e0b running}: state = "running", want "paused"
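	The sequence above reconciles the runc task list against crictl's output: sandbox (pause) tasks that crictl did not report are skipped as "not in ps", and real containers are skipped when their state is not the requested "paused". A rough Go sketch of that filtering, with assumed types rather than minikube's cri package:

	// Rough sketch of the filtering above; task and the crictl ID set are
	// illustrative shapes, not minikube's actual types.
	package main

	import "fmt"

	type task struct {
		ID     string
		Status string
	}

	// filterContainers keeps only IDs that crictl reported (containers, not
	// pause sandboxes) and whose runc state matches the requested one.
	func filterContainers(tasks []task, crictlIDs map[string]bool, wantState string) []string {
		var keep []string
		for _, t := range tasks {
			if !crictlIDs[t.ID] {
				continue // "skipping <id> - not in ps": a sandbox, not a container
			}
			if t.Status != wantState {
				continue // e.g. state = "running", want "paused"
			}
			keep = append(keep, t.ID)
		}
		return keep
	}

	func main() {
		tasks := []task{
			{ID: "069e91e7", Status: "running"}, // sandbox, absent from crictl output
			{ID: "1905ba41", Status: "running"}, // etcd container
		}
		crictlIDs := map[string]bool{"1905ba41": true}
		fmt.Println(filterContainers(tasks, crictlIDs, "paused")) // empty: nothing is paused yet
	}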
	I1124 03:16:11.782416  296456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:16:11.806262  296456 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:16:11.806508  296456 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:16:11.806723  296456 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:16:11.856896  296456 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:16:11.859320  296456 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-427637" does not appear in /home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:16:11.860409  296456 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-4883/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-427637" cluster setting kubeconfig missing "embed-certs-427637" context setting]
	I1124 03:16:11.861169  296456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/kubeconfig: {Name:mkf99f016b653afd282cf36d34d1cc32c34d90de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:11.863351  296456 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:16:11.885572  296456 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 03:16:11.885640  296456 kubeadm.go:602] duration metric: took 79.057684ms to restartPrimaryControlPlane
	I1124 03:16:11.885651  296456 kubeadm.go:403] duration metric: took 246.462683ms to StartCluster
	I1124 03:16:11.885838  296456 settings.go:142] acquiring lock: {Name:mk05d84efd831d60555ea716cd9d2a0a41871249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:11.885967  296456 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:16:11.888677  296456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/kubeconfig: {Name:mkf99f016b653afd282cf36d34d1cc32c34d90de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:11.889389  296456 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:16:11.889456  296456 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:16:11.889890  296456 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-427637"
	I1124 03:16:11.889908  296456 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-427637"
	W1124 03:16:11.889916  296456 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:16:11.889951  296456 host.go:66] Checking if "embed-certs-427637" exists ...
	I1124 03:16:11.890435  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:11.889646  296456 config.go:182] Loaded profile config "embed-certs-427637": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:16:11.890632  296456 addons.go:70] Setting default-storageclass=true in profile "embed-certs-427637"
	I1124 03:16:11.890698  296456 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-427637"
	I1124 03:16:11.890843  296456 addons.go:70] Setting dashboard=true in profile "embed-certs-427637"
	I1124 03:16:11.890985  296456 addons.go:239] Setting addon dashboard=true in "embed-certs-427637"
	W1124 03:16:11.890995  296456 addons.go:248] addon dashboard should already be in state true
	I1124 03:16:11.891019  296456 host.go:66] Checking if "embed-certs-427637" exists ...
	I1124 03:16:11.891328  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:11.890923  296456 addons.go:70] Setting metrics-server=true in profile "embed-certs-427637"
	I1124 03:16:11.891604  296456 addons.go:239] Setting addon metrics-server=true in "embed-certs-427637"
	W1124 03:16:11.891621  296456 addons.go:248] addon metrics-server should already be in state true
	I1124 03:16:11.891682  296456 host.go:66] Checking if "embed-certs-427637" exists ...
	I1124 03:16:11.892375  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:11.893110  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:11.893315  296456 out.go:179] * Verifying Kubernetes components...
	I1124 03:16:11.895241  296456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:16:11.940883  296456 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:16:11.941050  296456 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:16:11.942382  296456 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:16:11.942400  296456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:16:11.942458  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:11.943685  296456 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:16:11.945014  296456 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:16:11.945034  296456 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:16:11.945097  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:11.951174  296456 addons.go:239] Setting addon default-storageclass=true in "embed-certs-427637"
	W1124 03:16:11.951196  296456 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:16:11.951228  296456 host.go:66] Checking if "embed-certs-427637" exists ...
	I1124 03:16:11.951725  296456 cli_runner.go:164] Run: docker container inspect embed-certs-427637 --format={{.State.Status}}
	I1124 03:16:11.965712  296456 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 03:16:08.994892  292708 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.738466304s
	I1124 03:16:10.458017  292708 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.201666956s
	I1124 03:16:12.759692  292708 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.503216682s
	I1124 03:16:12.776901  292708 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:16:12.791960  292708 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:16:12.805665  292708 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:16:12.806302  292708 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-531301 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:16:12.816650  292708 kubeadm.go:319] [bootstrap-token] Using token: 4hgsom.aqbsuq2onqwyj901
	I1124 03:16:12.818162  292708 out.go:252]   - Configuring RBAC rules ...
	I1124 03:16:12.818385  292708 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:16:12.824889  292708 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:16:12.831523  292708 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:16:12.835140  292708 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:16:12.839094  292708 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:16:12.843363  292708 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:16:13.167152  292708 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:16:13.592979  292708 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:16:10.005564  294601 cli_runner.go:164] Run: docker network inspect auto-682898 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:16:10.023433  294601 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:16:10.027523  294601 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
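	The bash one-liner above swaps the host.minikube.internal entry in /etc/hosts: drop any old line ending in a tab plus the host name, append the fresh mapping, and copy the temp file back over /etc/hosts. A minimal Go sketch of the same edit, assuming the process may write the file directly (the logged command needs sudo for the final copy):

	// Sketch only: replace a single hosts entry atomically via a temp file.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func rewriteHosts(path, hostname, ip string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+hostname) {
				continue // drop the stale entry, like grep -v $'\t<host>$'
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+hostname) // append the fresh mapping
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // the logged run uses `sudo cp` instead
	}

	func main() {
		if err := rewriteHosts("/etc/hosts", "host.minikube.internal", "192.168.76.1"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}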
	I1124 03:16:10.038300  294601 kubeadm.go:884] updating cluster {Name:auto-682898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-682898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:16:10.038434  294601 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:16:10.038502  294601 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:16:10.066668  294601 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:16:10.066694  294601 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:16:10.066754  294601 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:16:10.092663  294601 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:16:10.092689  294601 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:16:10.092701  294601 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 03:16:10.092835  294601 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-682898 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-682898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:16:10.092911  294601 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:16:10.125416  294601 cni.go:84] Creating CNI manager for ""
	I1124 03:16:10.125442  294601 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:16:10.125461  294601 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:16:10.125490  294601 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-682898 NodeName:auto-682898 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:16:10.125674  294601 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-682898"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:16:10.125753  294601 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:16:10.134035  294601 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:16:10.134095  294601 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:16:10.142000  294601 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1124 03:16:10.156290  294601 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:16:10.172375  294601 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
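	The 2223-byte kubeadm.yaml.new staged here is the multi-document config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As an illustration only, a config of that shape can be rendered from per-profile parameters with text/template; the field names and template below are assumptions for this sketch, not minikube's actual template:

	// Illustration only: render a small InitConfiguration fragment from parameters.
	package main

	import (
		"os"
		"text/template"
	)

	type kubeadmParams struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}

	var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	`))

	func main() {
		p := kubeadmParams{
			AdvertiseAddress: "192.168.76.2",
			APIServerPort:    8443,
			CRISocket:        "/run/containerd/containerd.sock",
			NodeName:         "auto-682898",
		}
		// Writes the rendered YAML to stdout; the real flow copies the full
		// config to /var/tmp/minikube/kubeadm.yaml.new on the node.
		_ = kubeadmTmpl.Execute(os.Stdout, p)
	}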
	I1124 03:16:10.185258  294601 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:16:10.189177  294601 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:16:10.206268  294601 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:16:10.323839  294601 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:16:10.351658  294601 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898 for IP: 192.168.76.2
	I1124 03:16:10.351675  294601 certs.go:195] generating shared ca certs ...
	I1124 03:16:10.351687  294601 certs.go:227] acquiring lock for ca certs: {Name:mkd28e9f2e8e31fe23d0ba27851eb0df56d94420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:10.351844  294601 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key
	I1124 03:16:10.351909  294601 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key
	I1124 03:16:10.351925  294601 certs.go:257] generating profile certs ...
	I1124 03:16:10.351988  294601 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/client.key
	I1124 03:16:10.352024  294601 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/client.crt with IP's: []
	I1124 03:16:10.469924  294601 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/client.crt ...
	I1124 03:16:10.470013  294601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/client.crt: {Name:mk90e91a6bcfbcca85d5438b3b69dca3b1bc4dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:10.470184  294601 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/client.key ...
	I1124 03:16:10.470214  294601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/client.key: {Name:mk3cddeb8e70b297c112985c9924c5069b10b72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:10.470312  294601 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/apiserver.key.1d1a1320
	I1124 03:16:10.470338  294601 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/apiserver.crt.1d1a1320 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 03:16:10.606894  294601 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/apiserver.crt.1d1a1320 ...
	I1124 03:16:10.606916  294601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/apiserver.crt.1d1a1320: {Name:mk06d57c48ab1d9d71d5f9d30ae4b04592bba31a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:10.607086  294601 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/apiserver.key.1d1a1320 ...
	I1124 03:16:10.607102  294601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/apiserver.key.1d1a1320: {Name:mka1403156eb9c1e814356dcbfe6ec05fde048f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:10.607208  294601 certs.go:382] copying /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/apiserver.crt.1d1a1320 -> /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/apiserver.crt
	I1124 03:16:10.607306  294601 certs.go:386] copying /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/apiserver.key.1d1a1320 -> /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/apiserver.key
	I1124 03:16:10.607366  294601 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/proxy-client.key
	I1124 03:16:10.607380  294601 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/proxy-client.crt with IP's: []
	I1124 03:16:10.786501  294601 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/proxy-client.crt ...
	I1124 03:16:10.786537  294601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/proxy-client.crt: {Name:mk473d1a1d1fb2cd72e6eb42f0bb8ccc1a9affd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:10.786722  294601 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/proxy-client.key ...
	I1124 03:16:10.786737  294601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/proxy-client.key: {Name:mk3975d317219e367bc6b925b5896e2bc0fe7662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:16:10.787003  294601 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem (1338 bytes)
	W1124 03:16:10.787055  294601 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429_empty.pem, impossibly tiny 0 bytes
	I1124 03:16:10.787065  294601 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:16:10.787101  294601 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:16:10.787131  294601 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:16:10.787161  294601 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/certs/key.pem (1679 bytes)
	I1124 03:16:10.787218  294601 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem (1708 bytes)
	I1124 03:16:10.788043  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:16:10.806538  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:16:10.824288  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:16:10.842173  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:16:10.861228  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1124 03:16:10.879970  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:16:10.899875  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:16:10.919602  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/auto-682898/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:16:10.943927  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/ssl/certs/84292.pem --> /usr/share/ca-certificates/84292.pem (1708 bytes)
	I1124 03:16:10.971148  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:16:10.990844  294601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-4883/.minikube/certs/8429.pem --> /usr/share/ca-certificates/8429.pem (1338 bytes)
	I1124 03:16:11.008174  294601 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:16:11.020640  294601 ssh_runner.go:195] Run: openssl version
	I1124 03:16:11.027298  294601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84292.pem && ln -fs /usr/share/ca-certificates/84292.pem /etc/ssl/certs/84292.pem"
	I1124 03:16:11.035733  294601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84292.pem
	I1124 03:16:11.039494  294601 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/84292.pem
	I1124 03:16:11.039551  294601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84292.pem
	I1124 03:16:11.087082  294601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84292.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:16:11.096502  294601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:16:11.105550  294601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:16:11.109232  294601 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:16:11.109292  294601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:16:11.146829  294601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:16:11.156905  294601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8429.pem && ln -fs /usr/share/ca-certificates/8429.pem /etc/ssl/certs/8429.pem"
	I1124 03:16:11.166351  294601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8429.pem
	I1124 03:16:11.170714  294601 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/8429.pem
	I1124 03:16:11.170771  294601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8429.pem
	I1124 03:16:11.206796  294601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8429.pem /etc/ssl/certs/51391683.0"
	I1124 03:16:11.215755  294601 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:16:11.219489  294601 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:16:11.219550  294601 kubeadm.go:401] StartCluster: {Name:auto-682898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-682898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:16:11.219648  294601 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:16:11.219701  294601 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:16:11.249325  294601 cri.go:89] found id: ""
	I1124 03:16:11.249399  294601 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:16:11.258655  294601 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:16:11.266915  294601 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:16:11.266982  294601 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:16:11.275285  294601 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:16:11.275311  294601 kubeadm.go:158] found existing configuration files:
	
	I1124 03:16:11.275356  294601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:16:11.283917  294601 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:16:11.283982  294601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:16:11.292896  294601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:16:11.301876  294601 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:16:11.301950  294601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:16:11.310697  294601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:16:11.320042  294601 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:16:11.320105  294601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:16:11.329182  294601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:16:11.338927  294601 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:16:11.338990  294601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
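	The four grep/rm pairs above implement a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so the subsequent kubeadm init can regenerate it. A hedged Go sketch of the same idea (cleanStaleConfigs is an illustrative name, not minikube's API):

	// Sketch only: remove kubeconfigs that are missing or point at the wrong endpoint.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"path/filepath"
	)

	func cleanStaleConfigs(dir, endpoint string, names []string) {
		for _, name := range names {
			path := filepath.Join(dir, name)
			data, err := os.ReadFile(path)
			if err == nil && bytes.Contains(data, []byte(endpoint)) {
				continue // already targets the expected control plane; keep it
			}
			// missing or pointing elsewhere: remove so kubeadm regenerates it
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
		}
	}

	func main() {
		cleanStaleConfigs("/etc/kubernetes", "https://control-plane.minikube.internal:8443",
			[]string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"})
	}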
	I1124 03:16:11.348132  294601 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:16:11.440240  294601 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:16:11.535487  294601 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:16:14.173503  292708 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:16:14.175419  292708 kubeadm.go:319] 
	I1124 03:16:14.175681  292708 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:16:14.175744  292708 kubeadm.go:319] 
	I1124 03:16:14.175973  292708 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:16:14.176557  292708 kubeadm.go:319] 
	I1124 03:16:14.177407  292708 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:16:14.177497  292708 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:16:14.177571  292708 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:16:14.177592  292708 kubeadm.go:319] 
	I1124 03:16:14.177661  292708 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:16:14.177666  292708 kubeadm.go:319] 
	I1124 03:16:14.177728  292708 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:16:14.177734  292708 kubeadm.go:319] 
	I1124 03:16:14.177798  292708 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:16:14.177884  292708 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:16:14.177972  292708 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:16:14.177981  292708 kubeadm.go:319] 
	I1124 03:16:14.178076  292708 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:16:14.178185  292708 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:16:14.178192  292708 kubeadm.go:319] 
	I1124 03:16:14.178290  292708 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4hgsom.aqbsuq2onqwyj901 \
	I1124 03:16:14.178413  292708 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5e943442c508de754e907135e9f68708045a0a18fa82619a148153bf802a361b \
	I1124 03:16:14.178438  292708 kubeadm.go:319] 	--control-plane 
	I1124 03:16:14.178443  292708 kubeadm.go:319] 
	I1124 03:16:14.178545  292708 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:16:14.178551  292708 kubeadm.go:319] 
	I1124 03:16:14.178655  292708 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4hgsom.aqbsuq2onqwyj901 \
	I1124 03:16:14.178794  292708 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5e943442c508de754e907135e9f68708045a0a18fa82619a148153bf802a361b 
	I1124 03:16:14.182012  292708 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:16:14.182161  292708 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:16:14.182224  292708 cni.go:84] Creating CNI manager for ""
	I1124 03:16:14.182235  292708 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:16:14.183763  292708 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:16:11.967284  296456 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 03:16:11.967306  296456 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 03:16:11.967385  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:11.984720  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:12.003167  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:12.007609  296456 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:16:12.007680  296456 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:16:12.007757  296456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-427637
	I1124 03:16:12.016229  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:12.052763  296456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/embed-certs-427637/id_rsa Username:docker}
	I1124 03:16:12.148754  296456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:16:12.166613  296456 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:16:12.166640  296456 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:16:12.170765  296456 node_ready.go:35] waiting up to 6m0s for node "embed-certs-427637" to be "Ready" ...
	I1124 03:16:12.189608  296456 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:16:12.189635  296456 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:16:12.201274  296456 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 03:16:12.201377  296456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1124 03:16:12.217492  296456 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:16:12.217580  296456 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:16:12.242376  296456 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:16:12.244300  296456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:16:12.255557  296456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:16:12.263107  296456 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 03:16:12.263184  296456 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 03:16:12.272601  296456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:16:12.279273  296456 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:16:12.279343  296456 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:16:12.292246  296456 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 03:16:12.292346  296456 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 03:16:12.342019  296456 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:16:12.342040  296456 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:16:12.355336  296456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 03:16:12.390515  296456 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:16:12.390541  296456 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:16:12.408676  296456 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:16:12.408698  296456 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:16:12.430744  296456 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:16:12.430880  296456 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:16:12.460924  296456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:16:13.767512  296456 node_ready.go:49] node "embed-certs-427637" is "Ready"
	I1124 03:16:13.767551  296456 node_ready.go:38] duration metric: took 1.596711222s for node "embed-certs-427637" to be "Ready" ...
	I1124 03:16:13.767567  296456 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:16:13.767618  296456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:16:14.894210  296456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.638417352s)
	I1124 03:16:14.894341  296456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.621577577s)
	I1124 03:16:15.011736  296456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.550746773s)
	I1124 03:16:15.011760  296456 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.244123234s)
	I1124 03:16:15.011802  296456 api_server.go:72] duration metric: took 3.122113037s to wait for apiserver process to appear ...
	I1124 03:16:15.011810  296456 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:16:15.011832  296456 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:16:15.012289  296456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.656919208s)
	I1124 03:16:15.012314  296456 addons.go:495] Verifying addon metrics-server=true in "embed-certs-427637"
	I1124 03:16:15.013558  296456 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-427637 addons enable metrics-server
	
	I1124 03:16:15.014883  296456 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	c09c633c93ab7       56cc512116c8f       9 seconds ago       Running             busybox                   0                   79d48e512afea       busybox                                                default
	47833e056afc1       52546a367cc9e       26 seconds ago      Running             coredns                   0                   22f3e49755732       coredns-66bc5c9577-d78bs                               kube-system
	d5e5ef5586d54       6e38f40d628db       26 seconds ago      Running             storage-provisioner       0                   5762747da2f73       storage-provisioner                                    kube-system
	7beba598dd65a       409467f978b4a       37 seconds ago      Running             kindnet-cni               0                   b2a6fb7a51694       kindnet-b22kj                                          kube-system
	e3f888fa514e5       fc25172553d79       38 seconds ago      Running             kube-proxy                0                   681e0c229dc03       kube-proxy-pdsd5                                       kube-system
	6ab03610fd9a3       c80c8dbafe7dd       49 seconds ago      Running             kube-controller-manager   0                   14d2b509320c1       kube-controller-manager-default-k8s-diff-port-983163   kube-system
	f0dee428c966f       c3994bc696102       49 seconds ago      Running             kube-apiserver            0                   bb403dc0803cb       kube-apiserver-default-k8s-diff-port-983163            kube-system
	9822639bf4a96       7dd6aaa1717ab       49 seconds ago      Running             kube-scheduler            0                   18af15a8467fc       kube-scheduler-default-k8s-diff-port-983163            kube-system
	3499337e0ee82       5f1f5298c888d       49 seconds ago      Running             etcd                      0                   b96a17ac2b1f7       etcd-default-k8s-diff-port-983163                      kube-system
	
	
	==> containerd <==
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.261655059Z" level=info msg="connecting to shim d5e5ef5586d54d7bed7498dc46b356231a0485d014a3808bae84eb7f934910e0" address="unix:///run/containerd/s/6229b7ee0ec68785c5637e85a2337046f991ef509049e734e68d89e51855bde6" protocol=ttrpc version=3
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.284884843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d78bs,Uid:8b371860-34fe-4cb2-99f2-5a6457b82c9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"22f3e497557321424065cdf388e3fd04ebbdd4413e8d56ef62065eae6efcb9ba\""
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.295697506Z" level=info msg="CreateContainer within sandbox \"22f3e497557321424065cdf388e3fd04ebbdd4413e8d56ef62065eae6efcb9ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.304368157Z" level=info msg="Container 47833e056afc1701cbddfa37311fc0ab1e2f08e117ec8cd728b74fb12a7c6447: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.311755373Z" level=info msg="CreateContainer within sandbox \"22f3e497557321424065cdf388e3fd04ebbdd4413e8d56ef62065eae6efcb9ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"47833e056afc1701cbddfa37311fc0ab1e2f08e117ec8cd728b74fb12a7c6447\""
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.312608335Z" level=info msg="StartContainer for \"47833e056afc1701cbddfa37311fc0ab1e2f08e117ec8cd728b74fb12a7c6447\""
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.313911333Z" level=info msg="connecting to shim 47833e056afc1701cbddfa37311fc0ab1e2f08e117ec8cd728b74fb12a7c6447" address="unix:///run/containerd/s/ff0dd49adc05ee52dcb6dbc605d86432ab1d75d15c71f69e729cb1debd08edcb" protocol=ttrpc version=3
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.341521656Z" level=info msg="StartContainer for \"d5e5ef5586d54d7bed7498dc46b356231a0485d014a3808bae84eb7f934910e0\" returns successfully"
	Nov 24 03:15:49 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:15:49.398213048Z" level=info msg="StartContainer for \"47833e056afc1701cbddfa37311fc0ab1e2f08e117ec8cd728b74fb12a7c6447\" returns successfully"
	Nov 24 03:16:03 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:03.045960320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:c58a2189-5a2a-43df-9dab-025a0f79f2aa,Namespace:default,Attempt:0,}"
	Nov 24 03:16:03 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:03.691767810Z" level=info msg="connecting to shim 79d48e512afea324e55747fb32300c0e5933738863ed8cbd424a353c692a1226" address="unix:///run/containerd/s/d92f351532d74ea0e1dbf5ae2507d5c4b7184d5abf49f0bd2827ce2aa85c095f" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:16:03 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:03.891445971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:c58a2189-5a2a-43df-9dab-025a0f79f2aa,Namespace:default,Attempt:0,} returns sandbox id \"79d48e512afea324e55747fb32300c0e5933738863ed8cbd424a353c692a1226\""
	Nov 24 03:16:03 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:03.893909843Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.001348684Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.002134396Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.003216399Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.004852596Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.005423580Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.111358374s"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.005468233Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.009289211Z" level=info msg="CreateContainer within sandbox \"79d48e512afea324e55747fb32300c0e5933738863ed8cbd424a353c692a1226\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.015496006Z" level=info msg="Container c09c633c93ab7ac72a3cbb8e044127a93555d9f1df029bbe39c22e0111b8a777: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.020504253Z" level=info msg="CreateContainer within sandbox \"79d48e512afea324e55747fb32300c0e5933738863ed8cbd424a353c692a1226\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"c09c633c93ab7ac72a3cbb8e044127a93555d9f1df029bbe39c22e0111b8a777\""
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.021140681Z" level=info msg="StartContainer for \"c09c633c93ab7ac72a3cbb8e044127a93555d9f1df029bbe39c22e0111b8a777\""
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.022082014Z" level=info msg="connecting to shim c09c633c93ab7ac72a3cbb8e044127a93555d9f1df029bbe39c22e0111b8a777" address="unix:///run/containerd/s/d92f351532d74ea0e1dbf5ae2507d5c4b7184d5abf49f0bd2827ce2aa85c095f" protocol=ttrpc version=3
	Nov 24 03:16:06 default-k8s-diff-port-983163 containerd[664]: time="2025-11-24T03:16:06.073910386Z" level=info msg="StartContainer for \"c09c633c93ab7ac72a3cbb8e044127a93555d9f1df029bbe39c22e0111b8a777\" returns successfully"
	
	
	==> coredns [47833e056afc1701cbddfa37311fc0ab1e2f08e117ec8cd728b74fb12a7c6447] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54279 - 56713 "HINFO IN 4735573002917364633.5896329205484484595. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067804178s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-983163
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-983163
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=default-k8s-diff-port-983163
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_15_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:15:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-983163
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:16:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:15:48 +0000   Mon, 24 Nov 2025 03:15:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:15:48 +0000   Mon, 24 Nov 2025 03:15:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:15:48 +0000   Mon, 24 Nov 2025 03:15:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:15:48 +0000   Mon, 24 Nov 2025 03:15:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-983163
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                ddca803d-d9cd-4899-9051-14cb08d85cbf
	  Boot ID:                    6a444014-1437-4ef5-ba54-cb22d4aebaaf
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-d78bs                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     39s
	  kube-system                 etcd-default-k8s-diff-port-983163                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         45s
	  kube-system                 kindnet-b22kj                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      39s
	  kube-system                 kube-apiserver-default-k8s-diff-port-983163             250m (3%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-983163    200m (2%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-pdsd5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-scheduler-default-k8s-diff-port-983163             100m (1%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 37s                kube-proxy       
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node default-k8s-diff-port-983163 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node default-k8s-diff-port-983163 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x7 over 50s)  kubelet          Node default-k8s-diff-port-983163 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  50s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  44s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  44s                kubelet          Node default-k8s-diff-port-983163 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s                kubelet          Node default-k8s-diff-port-983163 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s                kubelet          Node default-k8s-diff-port-983163 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node default-k8s-diff-port-983163 event: Registered Node default-k8s-diff-port-983163 in Controller
	  Normal  NodeReady                27s                kubelet          Node default-k8s-diff-port-983163 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001875] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411990] i8042: Warning: Keylock active
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513869] block sda: the capability attribute has been deprecated.
	[  +0.086430] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023975] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.680840] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [3499337e0ee82a2f81bd5caa1e79e01cff2507b0698469d50af9736a90b933ca] <==
	{"level":"warn","ts":"2025-11-24T03:15:27.612144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.621839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.626677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.634248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.641151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.647407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.654194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.667970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.675097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.681915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.701258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.710614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.718192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.725606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.732864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.740049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.747860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.753691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.774552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.782610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.790773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:27.851504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:15:29.939992Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.84401ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790224371133139 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:controller:expand-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:controller:expand-controller\" value_size:655 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T03:15:29.940126Z","caller":"traceutil/trace.go:172","msg":"trace[958052545] transaction","detail":"{read_only:false; response_revision:195; number_of_response:1; }","duration":"258.009958ms","start":"2025-11-24T03:15:29.682099Z","end":"2025-11-24T03:15:29.940109Z","steps":["trace[958052545] 'process raft request'  (duration: 120.580305ms)","trace[958052545] 'compare'  (duration: 136.727923ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:16:02.843698Z","caller":"traceutil/trace.go:172","msg":"trace[1361567151] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"161.317271ms","start":"2025-11-24T03:16:02.682359Z","end":"2025-11-24T03:16:02.843676Z","steps":["trace[1361567151] 'process raft request'  (duration: 94.723851ms)","trace[1361567151] 'compare'  (duration: 66.392043ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:16:15 up 58 min,  0 user,  load average: 5.91, 3.56, 2.29
	Linux default-k8s-diff-port-983163 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7beba598dd65a23f5bc047d323f14dd12e71445a729cd6f29e2c587dae089612] <==
	I1124 03:15:38.146695       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:15:38.147026       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 03:15:38.147173       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:15:38.147194       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:15:38.147229       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:15:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:15:38.347877       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:15:38.347936       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:15:38.347951       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:15:38.445680       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:15:38.809263       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:15:38.809320       1 metrics.go:72] Registering metrics
	I1124 03:15:38.809419       1 controller.go:711] "Syncing nftables rules"
	I1124 03:15:48.349906       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:15:48.349963       1 main.go:301] handling current node
	I1124 03:15:58.354849       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:15:58.354918       1 main.go:301] handling current node
	I1124 03:16:08.348894       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:16:08.348935       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f0dee428c966f47fa114a6190b11d31311ef28bf95bd4181a7a3c7cb9ba1b761] <==
	I1124 03:15:28.409312       1 policy_source.go:240] refreshing policies
	E1124 03:15:28.428364       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1124 03:15:28.474944       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:15:28.482467       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:15:28.482667       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:28.492466       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:28.492522       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:15:28.578188       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:15:29.277952       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:15:29.281838       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:15:29.281854       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:15:30.143216       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:15:30.181819       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:15:30.283064       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:15:30.289135       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1124 03:15:30.290164       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:15:30.295414       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:15:30.754513       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:15:31.252424       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:15:31.262615       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:15:31.271605       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:15:35.954553       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:35.958479       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:15:36.454042       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:15:36.654642       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6ab03610fd9a3c11ed53b9af84684605d2fbe3dac58d8504961f02d59de2827c] <==
	I1124 03:15:35.750760       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 03:15:35.750902       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:15:35.750917       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:15:35.751317       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:15:35.751313       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:15:35.751337       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:15:35.751718       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 03:15:35.751754       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:15:35.751831       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 03:15:35.751831       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:15:35.752542       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:15:35.752610       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:15:35.753701       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 03:15:35.756029       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:15:35.758169       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:15:35.758200       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:15:35.761807       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 03:15:35.761878       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 03:15:35.761922       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 03:15:35.761930       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 03:15:35.761937       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 03:15:35.763504       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:15:35.769414       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-983163" podCIDRs=["10.244.0.0/24"]
	I1124 03:15:35.769525       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:15:50.703444       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e3f888fa514e5254d2bc249c2afa1a07e6a99bf5560622158ceec2cf8f131ca5] <==
	I1124 03:15:37.640628       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:15:37.707656       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:15:37.808168       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:15:37.808215       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 03:15:37.808358       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:15:37.830857       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:15:37.830906       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:15:37.836297       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:15:37.836700       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:15:37.836723       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:15:37.838447       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:15:37.838488       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:15:37.838472       1 config.go:200] "Starting service config controller"
	I1124 03:15:37.838569       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:15:37.838584       1 config.go:309] "Starting node config controller"
	I1124 03:15:37.838594       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:15:37.838602       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:15:37.838543       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:15:37.838627       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:15:37.938688       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:15:37.938722       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:15:37.938735       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9822639bf4a960eaa781e0de24d0537229b69860c3f8fd7791731f3453b44446] <==
	E1124 03:15:28.371035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:15:28.371122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:15:28.371134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:15:28.371144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:15:28.371314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:15:28.371452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:15:28.371541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:15:28.371553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:15:28.371802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:15:28.371865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:15:28.371620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:15:28.371912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:15:28.371620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:15:28.372010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:15:29.259965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:15:29.270503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:15:29.308664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:15:29.315844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:15:29.320937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:15:29.395691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:15:29.493423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:15:29.519570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:15:29.548835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:15:29.736014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 03:15:31.565758       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.551753    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a704097c-5e9c-472c-a33c-74f3b5555277-kube-proxy\") pod \"kube-proxy-pdsd5\" (UID: \"a704097c-5e9c-472c-a33c-74f3b5555277\") " pod="kube-system/kube-proxy-pdsd5"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.551839    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c78dfc1-53c3-4d7d-bac5-a57266e63935-xtables-lock\") pod \"kindnet-b22kj\" (UID: \"9c78dfc1-53c3-4d7d-bac5-a57266e63935\") " pod="kube-system/kindnet-b22kj"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.551905    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng5tw\" (UniqueName: \"kubernetes.io/projected/a704097c-5e9c-472c-a33c-74f3b5555277-kube-api-access-ng5tw\") pod \"kube-proxy-pdsd5\" (UID: \"a704097c-5e9c-472c-a33c-74f3b5555277\") " pod="kube-system/kube-proxy-pdsd5"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.551938    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a704097c-5e9c-472c-a33c-74f3b5555277-xtables-lock\") pod \"kube-proxy-pdsd5\" (UID: \"a704097c-5e9c-472c-a33c-74f3b5555277\") " pod="kube-system/kube-proxy-pdsd5"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.551968    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a704097c-5e9c-472c-a33c-74f3b5555277-lib-modules\") pod \"kube-proxy-pdsd5\" (UID: \"a704097c-5e9c-472c-a33c-74f3b5555277\") " pod="kube-system/kube-proxy-pdsd5"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.551998    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c78dfc1-53c3-4d7d-bac5-a57266e63935-lib-modules\") pod \"kindnet-b22kj\" (UID: \"9c78dfc1-53c3-4d7d-bac5-a57266e63935\") " pod="kube-system/kindnet-b22kj"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.552019    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbc6x\" (UniqueName: \"kubernetes.io/projected/9c78dfc1-53c3-4d7d-bac5-a57266e63935-kube-api-access-pbc6x\") pod \"kindnet-b22kj\" (UID: \"9c78dfc1-53c3-4d7d-bac5-a57266e63935\") " pod="kube-system/kindnet-b22kj"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:36.552065    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9c78dfc1-53c3-4d7d-bac5-a57266e63935-cni-cfg\") pod \"kindnet-b22kj\" (UID: \"9c78dfc1-53c3-4d7d-bac5-a57266e63935\") " pod="kube-system/kindnet-b22kj"
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:15:36.660361    1458 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:15:36.660417    1458 projected.go:196] Error preparing data for projected volume kube-api-access-pbc6x for pod kube-system/kindnet-b22kj: configmap "kube-root-ca.crt" not found
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:15:36.660379    1458 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:15:36.660524    1458 projected.go:196] Error preparing data for projected volume kube-api-access-ng5tw for pod kube-system/kube-proxy-pdsd5: configmap "kube-root-ca.crt" not found
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:15:36.660532    1458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c78dfc1-53c3-4d7d-bac5-a57266e63935-kube-api-access-pbc6x podName:9c78dfc1-53c3-4d7d-bac5-a57266e63935 nodeName:}" failed. No retries permitted until 2025-11-24 03:15:37.160481339 +0000 UTC m=+6.150081948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pbc6x" (UniqueName: "kubernetes.io/projected/9c78dfc1-53c3-4d7d-bac5-a57266e63935-kube-api-access-pbc6x") pod "kindnet-b22kj" (UID: "9c78dfc1-53c3-4d7d-bac5-a57266e63935") : configmap "kube-root-ca.crt" not found
	Nov 24 03:15:36 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:15:36.660592    1458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a704097c-5e9c-472c-a33c-74f3b5555277-kube-api-access-ng5tw podName:a704097c-5e9c-472c-a33c-74f3b5555277 nodeName:}" failed. No retries permitted until 2025-11-24 03:15:37.160568348 +0000 UTC m=+6.150168955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ng5tw" (UniqueName: "kubernetes.io/projected/a704097c-5e9c-472c-a33c-74f3b5555277-kube-api-access-ng5tw") pod "kube-proxy-pdsd5" (UID: "a704097c-5e9c-472c-a33c-74f3b5555277") : configmap "kube-root-ca.crt" not found
	Nov 24 03:15:38 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:38.169750    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-b22kj" podStartSLOduration=2.169727227 podStartE2EDuration="2.169727227s" podCreationTimestamp="2025-11-24 03:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:38.169328174 +0000 UTC m=+7.158928810" watchObservedRunningTime="2025-11-24 03:15:38.169727227 +0000 UTC m=+7.159327836"
	Nov 24 03:15:40 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:40.669936    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pdsd5" podStartSLOduration=4.669911293 podStartE2EDuration="4.669911293s" podCreationTimestamp="2025-11-24 03:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:38.199213964 +0000 UTC m=+7.188814571" watchObservedRunningTime="2025-11-24 03:15:40.669911293 +0000 UTC m=+9.659511902"
	Nov 24 03:15:48 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:48.386021    1458 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:15:48 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:48.536194    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ktch\" (UniqueName: \"kubernetes.io/projected/2da9e6e3-1153-465b-b308-22562c37e66d-kube-api-access-6ktch\") pod \"storage-provisioner\" (UID: \"2da9e6e3-1153-465b-b308-22562c37e66d\") " pod="kube-system/storage-provisioner"
	Nov 24 03:15:48 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:48.536444    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b371860-34fe-4cb2-99f2-5a6457b82c9e-config-volume\") pod \"coredns-66bc5c9577-d78bs\" (UID: \"8b371860-34fe-4cb2-99f2-5a6457b82c9e\") " pod="kube-system/coredns-66bc5c9577-d78bs"
	Nov 24 03:15:48 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:48.536618    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z566k\" (UniqueName: \"kubernetes.io/projected/8b371860-34fe-4cb2-99f2-5a6457b82c9e-kube-api-access-z566k\") pod \"coredns-66bc5c9577-d78bs\" (UID: \"8b371860-34fe-4cb2-99f2-5a6457b82c9e\") " pod="kube-system/coredns-66bc5c9577-d78bs"
	Nov 24 03:15:48 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:48.536681    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2da9e6e3-1153-465b-b308-22562c37e66d-tmp\") pod \"storage-provisioner\" (UID: \"2da9e6e3-1153-465b-b308-22562c37e66d\") " pod="kube-system/storage-provisioner"
	Nov 24 03:15:50 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:15:50.213966    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-d78bs" podStartSLOduration=14.213937341 podStartE2EDuration="14.213937341s" podCreationTimestamp="2025-11-24 03:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:50.211272549 +0000 UTC m=+19.200873159" watchObservedRunningTime="2025-11-24 03:15:50.213937341 +0000 UTC m=+19.203537950"
	Nov 24 03:16:00 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:16:00.208987    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=23.208963108 podStartE2EDuration="23.208963108s" podCreationTimestamp="2025-11-24 03:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:15:50.238374282 +0000 UTC m=+19.227974891" watchObservedRunningTime="2025-11-24 03:16:00.208963108 +0000 UTC m=+29.198563716"
	Nov 24 03:16:02 default-k8s-diff-port-983163 kubelet[1458]: I1124 03:16:02.725725    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7htjx\" (UniqueName: \"kubernetes.io/projected/c58a2189-5a2a-43df-9dab-025a0f79f2aa-kube-api-access-7htjx\") pod \"busybox\" (UID: \"c58a2189-5a2a-43df-9dab-025a0f79f2aa\") " pod="default/busybox"
	Nov 24 03:16:11 default-k8s-diff-port-983163 kubelet[1458]: E1124 03:16:11.742581    1458 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.103.2:60994->192.168.103.2:10010: write tcp 192.168.103.2:60994->192.168.103.2:10010: write: broken pipe
	
	
	==> storage-provisioner [d5e5ef5586d54d7bed7498dc46b356231a0485d014a3808bae84eb7f934910e0] <==
	W1124 03:15:51.395878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:53.399560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:53.405622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:55.408692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:55.447423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:57.450526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:57.457406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:59.461045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:15:59.465087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:01.468883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:01.473645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:03.476349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:03.543018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:05.546523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:05.551011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:07.554691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:07.558606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:09.562025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:09.568108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:11.575259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:11.585787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:13.591758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:13.601041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:15.604444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:15.609133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-983163 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.31s)
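
Note on the storage-provisioner output above: each warnings.go:70 line is the API server's deprecation warning for the core v1 Endpoints resource, which the provisioner evidently still reads or writes (most likely for its leader-election lock); Kubernetes points callers at the discovery.k8s.io/v1 EndpointSlice API instead. The warnings are cosmetic for this test. As a minimal, hypothetical client-go sketch (not minikube or storage-provisioner code), listing the replacement resource looks roughly like this:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the default kubeconfig (~/.kube/config); illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Read EndpointSlices (discovery.k8s.io/v1) rather than the deprecated core v1 Endpoints.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}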

                                                
                                    

Test pass (291/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 12.36
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 11.23
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.41
21 TestBinaryMirror 0.82
22 TestOffline 55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 125.13
29 TestAddons/serial/Volcano 39.23
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.46
35 TestAddons/parallel/Registry 15.66
36 TestAddons/parallel/RegistryCreds 0.63
37 TestAddons/parallel/Ingress 21.4
38 TestAddons/parallel/InspektorGadget 11.73
39 TestAddons/parallel/MetricsServer 5.66
41 TestAddons/parallel/CSI 49.43
42 TestAddons/parallel/Headlamp 16.52
43 TestAddons/parallel/CloudSpanner 5.5
44 TestAddons/parallel/LocalPath 52.64
45 TestAddons/parallel/NvidiaDevicePlugin 5.55
46 TestAddons/parallel/Yakd 11.67
47 TestAddons/parallel/AmdGpuDevicePlugin 5.54
48 TestAddons/StoppedEnableDisable 12.78
49 TestCertOptions 25.32
50 TestCertExpiration 215.47
52 TestForceSystemdFlag 25.73
53 TestForceSystemdEnv 29.03
58 TestErrorSpam/setup 19.37
59 TestErrorSpam/start 0.66
60 TestErrorSpam/status 0.95
61 TestErrorSpam/pause 1.45
62 TestErrorSpam/unpause 1.52
63 TestErrorSpam/stop 2.1
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 40.1
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.87
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.71
75 TestFunctional/serial/CacheCmd/cache/add_local 1.9
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 40.44
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.21
86 TestFunctional/serial/LogsFileCmd 1.26
87 TestFunctional/serial/InvalidService 3.94
89 TestFunctional/parallel/ConfigCmd 0.47
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.18
98 TestFunctional/parallel/AddonsCmd 0.14
101 TestFunctional/parallel/SSHCmd 0.55
102 TestFunctional/parallel/CpCmd 1.95
104 TestFunctional/parallel/FileSync 0.27
105 TestFunctional/parallel/CertSync 1.68
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
113 TestFunctional/parallel/License 0.29
115 TestFunctional/parallel/MountCmd/any-port 11.7
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
117 TestFunctional/parallel/ProfileCmd/profile_list 0.45
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/MountCmd/specific-port 1.59
125 TestFunctional/parallel/MountCmd/VerifyCleanup 1.7
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/Version/short 0.07
132 TestFunctional/parallel/Version/components 0.48
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
137 TestFunctional/parallel/ImageCommands/ImageBuild 3.21
138 TestFunctional/parallel/ImageCommands/Setup 1.74
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.08
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.04
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.84
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
149 TestFunctional/parallel/ServiceCmd/List 1.71
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 136.19
162 TestMultiControlPlane/serial/DeployApp 5.4
163 TestMultiControlPlane/serial/PingHostFromPods 1.16
164 TestMultiControlPlane/serial/AddWorkerNode 24.84
165 TestMultiControlPlane/serial/NodeLabels 0.06
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
167 TestMultiControlPlane/serial/CopyFile 17.39
168 TestMultiControlPlane/serial/StopSecondaryNode 12.75
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
170 TestMultiControlPlane/serial/RestartSecondaryNode 8.68
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.92
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 96.05
173 TestMultiControlPlane/serial/DeleteSecondaryNode 9.35
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
175 TestMultiControlPlane/serial/StopCluster 36.16
176 TestMultiControlPlane/serial/RestartCluster 56.99
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
178 TestMultiControlPlane/serial/AddSecondaryNode 80.86
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
184 TestJSONOutput/start/Command 38.38
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.72
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.59
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.85
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.23
209 TestKicCustomNetwork/create_custom_network 36.03
210 TestKicCustomNetwork/use_default_bridge_network 23.27
211 TestKicExistingNetwork 26.29
212 TestKicCustomSubnet 23.63
213 TestKicStaticIP 26.35
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 47.89
218 TestMountStart/serial/StartWithMountFirst 7.45
219 TestMountStart/serial/VerifyMountFirst 0.28
220 TestMountStart/serial/StartWithMountSecond 7.51
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.68
223 TestMountStart/serial/VerifyMountPostDelete 0.28
224 TestMountStart/serial/Stop 1.27
225 TestMountStart/serial/RestartStopped 7.61
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 65.36
230 TestMultiNode/serial/DeployApp2Nodes 4.73
231 TestMultiNode/serial/PingHostFrom2Pods 0.79
232 TestMultiNode/serial/AddNode 23.59
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.67
235 TestMultiNode/serial/CopyFile 9.92
236 TestMultiNode/serial/StopNode 2.27
237 TestMultiNode/serial/StartAfterStop 6.91
238 TestMultiNode/serial/RestartKeepsNodes 68.51
239 TestMultiNode/serial/DeleteNode 5.22
240 TestMultiNode/serial/StopMultiNode 24.01
241 TestMultiNode/serial/RestartMultiNode 48.29
242 TestMultiNode/serial/ValidateNameConflict 21.83
247 TestPreload 109.84
249 TestScheduledStopUnix 98.02
252 TestInsufficientStorage 9.15
253 TestRunningBinaryUpgrade 61.47
255 TestKubernetesUpgrade 332.65
256 TestMissingContainerUpgrade 126.91
257 TestStoppedBinaryUpgrade/Setup 2.64
259 TestPause/serial/Start 48.37
260 TestStoppedBinaryUpgrade/Upgrade 99.44
261 TestPause/serial/SecondStartNoReconfiguration 6.9
262 TestPause/serial/Pause 0.76
263 TestPause/serial/VerifyStatus 0.37
264 TestPause/serial/Unpause 0.73
265 TestPause/serial/PauseAgain 0.8
266 TestPause/serial/DeletePaused 2.83
267 TestPause/serial/VerifyDeletedResources 0.59
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.33
277 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
278 TestNoKubernetes/serial/StartWithK8s 24.55
279 TestNoKubernetes/serial/StartWithStopK8s 22.24
287 TestNetworkPlugins/group/false 3.49
291 TestNoKubernetes/serial/Start 7.54
293 TestStartStop/group/old-k8s-version/serial/FirstStart 50.87
294 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
295 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
296 TestNoKubernetes/serial/ProfileList 45.1
298 TestNoKubernetes/serial/Stop 1.27
299 TestNoKubernetes/serial/StartNoArgs 6.69
300 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
302 TestStartStop/group/no-preload/serial/FirstStart 53.67
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.86
304 TestStartStop/group/old-k8s-version/serial/Stop 12.8
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
306 TestStartStop/group/old-k8s-version/serial/SecondStart 48.52
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.84
310 TestStartStop/group/no-preload/serial/Stop 12.06
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
313 TestStartStop/group/old-k8s-version/serial/Pause 2.83
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/no-preload/serial/SecondStart 47.41
317 TestStartStop/group/embed-certs/serial/FirstStart 43.64
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.14
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
324 TestStartStop/group/no-preload/serial/Pause 3.45
325 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.18
326 TestStartStop/group/embed-certs/serial/Stop 14.39
328 TestStartStop/group/newest-cni/serial/FirstStart 25.75
329 TestNetworkPlugins/group/auto/Start 46.07
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
332 TestStartStop/group/embed-certs/serial/SecondStart 53.95
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
334 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.25
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.94
337 TestStartStop/group/newest-cni/serial/Stop 1.39
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
339 TestStartStop/group/newest-cni/serial/SecondStart 10.75
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.31
341 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.93
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
345 TestStartStop/group/newest-cni/serial/Pause 2.8
346 TestNetworkPlugins/group/kindnet/Start 42.79
347 TestNetworkPlugins/group/auto/KubeletFlags 0.35
348 TestNetworkPlugins/group/auto/NetCatPod 8.24
349 TestNetworkPlugins/group/auto/DNS 0.13
350 TestNetworkPlugins/group/auto/Localhost 0.12
351 TestNetworkPlugins/group/auto/HairPin 0.11
352 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
354 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
355 TestStartStop/group/embed-certs/serial/Pause 2.95
356 TestNetworkPlugins/group/calico/Start 52.36
357 TestNetworkPlugins/group/custom-flannel/Start 57.87
358 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
361 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
362 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.42
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
364 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
365 TestNetworkPlugins/group/enable-default-cni/Start 68.1
366 TestNetworkPlugins/group/kindnet/DNS 0.16
367 TestNetworkPlugins/group/kindnet/Localhost 0.14
368 TestNetworkPlugins/group/kindnet/HairPin 0.13
369 TestNetworkPlugins/group/flannel/Start 52.55
370 TestNetworkPlugins/group/calico/ControllerPod 6.01
371 TestNetworkPlugins/group/calico/KubeletFlags 0.35
372 TestNetworkPlugins/group/calico/NetCatPod 9.18
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.24
375 TestNetworkPlugins/group/calico/DNS 0.15
376 TestNetworkPlugins/group/calico/Localhost 0.13
377 TestNetworkPlugins/group/calico/HairPin 0.12
378 TestNetworkPlugins/group/custom-flannel/DNS 0.13
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.24
383 TestNetworkPlugins/group/bridge/Start 37.51
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
389 TestNetworkPlugins/group/flannel/NetCatPod 9.2
390 TestNetworkPlugins/group/flannel/DNS 0.14
391 TestNetworkPlugins/group/flannel/Localhost 0.12
392 TestNetworkPlugins/group/flannel/HairPin 0.11
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
394 TestNetworkPlugins/group/bridge/NetCatPod 9.19
395 TestNetworkPlugins/group/bridge/DNS 0.13
396 TestNetworkPlugins/group/bridge/Localhost 0.1
397 TestNetworkPlugins/group/bridge/HairPin 0.1
TestDownloadOnly/v1.28.0/json-events (12.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-212275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-212275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.361303827s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.36s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 02:24:27.673978    8429 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1124 02:24:27.674059    8429 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
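
For context, the preload-exists subtest essentially asserts that the preloaded image tarball reported above is already present in the local minikube cache. A rough stand-alone equivalent of that check, with the path copied from this run's log (it depends on MINIKUBE_HOME), is:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path taken from the preload-exists log above; adjust for your MINIKUBE_HOME.
	preload := "/home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4"
	info, err := os.Stat(preload)
	if err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Printf("preload exists: %d bytes\n", info.Size())
}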

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-212275
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-212275: exit status 85 (73.045876ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-212275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-212275 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:24:15
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:24:15.367406    8441 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:24:15.367651    8441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:24:15.367661    8441 out.go:374] Setting ErrFile to fd 2...
	I1124 02:24:15.367667    8441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:24:15.367868    8441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	W1124 02:24:15.368021    8441 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21975-4883/.minikube/config/config.json: open /home/jenkins/minikube-integration/21975-4883/.minikube/config/config.json: no such file or directory
	I1124 02:24:15.368529    8441 out.go:368] Setting JSON to true
	I1124 02:24:15.369450    8441 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":398,"bootTime":1763950657,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:24:15.369503    8441 start.go:143] virtualization: kvm guest
	I1124 02:24:15.374030    8441 out.go:99] [download-only-212275] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1124 02:24:15.374174    8441 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 02:24:15.374246    8441 notify.go:221] Checking for updates...
	I1124 02:24:15.375375    8441 out.go:171] MINIKUBE_LOCATION=21975
	I1124 02:24:15.376512    8441 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:24:15.377771    8441 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 02:24:15.379113    8441 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 02:24:15.380169    8441 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 02:24:15.382198    8441 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 02:24:15.382431    8441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:24:15.408081    8441 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:24:15.408159    8441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:24:15.780533    8441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-24 02:24:15.770139473 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:24:15.780635    8441 docker.go:319] overlay module found
	I1124 02:24:15.782202    8441 out.go:99] Using the docker driver based on user configuration
	I1124 02:24:15.782230    8441 start.go:309] selected driver: docker
	I1124 02:24:15.782237    8441 start.go:927] validating driver "docker" against <nil>
	I1124 02:24:15.782322    8441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:24:15.842281    8441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-24 02:24:15.832897895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:24:15.842480    8441 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 02:24:15.843058    8441 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 02:24:15.843251    8441 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 02:24:15.845174    8441 out.go:171] Using Docker driver with root privileges
	I1124 02:24:15.846325    8441 cni.go:84] Creating CNI manager for ""
	I1124 02:24:15.846396    8441 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 02:24:15.846408    8441 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 02:24:15.846469    8441 start.go:353] cluster config:
	{Name:download-only-212275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-212275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:24:15.847750    8441 out.go:99] Starting "download-only-212275" primary control-plane node in "download-only-212275" cluster
	I1124 02:24:15.847771    8441 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 02:24:15.849053    8441 out.go:99] Pulling base image v0.0.48-1763935653-21975 ...
	I1124 02:24:15.849120    8441 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 02:24:15.849219    8441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 02:24:15.866055    8441 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 02:24:15.866232    8441 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory
	I1124 02:24:15.866351    8441 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 02:24:15.941586    8441 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1124 02:24:15.941613    8441 cache.go:65] Caching tarball of preloaded images
	I1124 02:24:15.941807    8441 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 02:24:15.943518    8441 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1124 02:24:15.943541    8441 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1124 02:24:16.039559    8441 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1124 02:24:16.039700    8441 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1124 02:24:21.956480    8441 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 as a tarball
	
	
	* The control-plane node download-only-212275 host does not exist
	  To start a cluster, run: "minikube start -p download-only-212275"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-212275
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (11.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-843554 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-843554 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.230166615s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1124 02:24:39.342359    8429 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1124 02:24:39.342406    8429 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-843554
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-843554: exit status 85 (71.377509ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-212275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-212275 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ delete  │ -p download-only-212275                                                                                                                                                               │ download-only-212275 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ start   │ -o=json --download-only -p download-only-843554 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-843554 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:24:28
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:24:28.161406    8815 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:24:28.161647    8815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:24:28.161656    8815 out.go:374] Setting ErrFile to fd 2...
	I1124 02:24:28.161660    8815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:24:28.161861    8815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 02:24:28.162278    8815 out.go:368] Setting JSON to true
	I1124 02:24:28.162996    8815 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":411,"bootTime":1763950657,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:24:28.163043    8815 start.go:143] virtualization: kvm guest
	I1124 02:24:28.165036    8815 out.go:99] [download-only-843554] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:24:28.165153    8815 notify.go:221] Checking for updates...
	I1124 02:24:28.166337    8815 out.go:171] MINIKUBE_LOCATION=21975
	I1124 02:24:28.167591    8815 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:24:28.169001    8815 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 02:24:28.170234    8815 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 02:24:28.171561    8815 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 02:24:28.173850    8815 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 02:24:28.174079    8815 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:24:28.196064    8815 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:24:28.196150    8815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:24:28.253239    8815 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 02:24:28.243322835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:24:28.253381    8815 docker.go:319] overlay module found
	I1124 02:24:28.254953    8815 out.go:99] Using the docker driver based on user configuration
	I1124 02:24:28.254984    8815 start.go:309] selected driver: docker
	I1124 02:24:28.254992    8815 start.go:927] validating driver "docker" against <nil>
	I1124 02:24:28.255086    8815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:24:28.316335    8815 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 02:24:28.305975588 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:24:28.316514    8815 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 02:24:28.317069    8815 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 02:24:28.317250    8815 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 02:24:28.318966    8815 out.go:171] Using Docker driver with root privileges
	I1124 02:24:28.320202    8815 cni.go:84] Creating CNI manager for ""
	I1124 02:24:28.320260    8815 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 02:24:28.320269    8815 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 02:24:28.320324    8815 start.go:353] cluster config:
	{Name:download-only-843554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-843554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:24:28.321670    8815 out.go:99] Starting "download-only-843554" primary control-plane node in "download-only-843554" cluster
	I1124 02:24:28.321700    8815 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 02:24:28.322836    8815 out.go:99] Pulling base image v0.0.48-1763935653-21975 ...
	I1124 02:24:28.322877    8815 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 02:24:28.322957    8815 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 02:24:28.338981    8815 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 02:24:28.339086    8815 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory
	I1124 02:24:28.339101    8815 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory, skipping pull
	I1124 02:24:28.339106    8815 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in cache, skipping pull
	I1124 02:24:28.339114    8815 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 as a tarball
	I1124 02:24:28.668216    8815 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1124 02:24:28.668273    8815 cache.go:65] Caching tarball of preloaded images
	I1124 02:24:28.668471    8815 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 02:24:28.670452    8815 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1124 02:24:28.670488    8815 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1124 02:24:28.767165    8815 preload.go:295] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1124 02:24:28.767207    8815 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-843554 host does not exist
	  To start a cluster, run: "minikube start -p download-only-843554"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)
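The download step above fetches the preload tarball with an md5 checksum obtained from the GCS API. As a sanity check, the cached file can be re-verified by hand; this is an illustrative sketch only, assuming the tarball is still at the cache path shown in the log:
	# path and expected digest are copied from the preload.go/download.go lines above
	md5sum /home/jenkins/minikube-integration/21975-4883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	# the digest should match the logged value: 5d6e976daeaa84851976fc4d674fd8f4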

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-843554
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.41s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-102283 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-102283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-102283
--- PASS: TestDownloadOnlyKic (0.41s)

                                                
                                    
TestBinaryMirror (0.82s)

                                                
                                                
=== RUN   TestBinaryMirror
I1124 02:24:40.488419    8429 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-803731 --alsologtostderr --binary-mirror http://127.0.0.1:43101 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-803731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-803731
--- PASS: TestBinaryMirror (0.82s)
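TestBinaryMirror resolves kubectl through the checksum URL logged above. The equivalent manual verification, a sketch assuming curl and sha256sum are available on the host (not something the test itself runs):
	curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl
	echo "$(curl -sL https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256)  kubectl" | sha256sum --check
	# prints "kubectl: OK" when the downloaded binary matches the published digest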

                                                
                                    
TestOffline (55s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-383075 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-383075 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (47.996890957s)
helpers_test.go:175: Cleaning up "offline-containerd-383075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-383075
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-383075: (7.000620671s)
--- PASS: TestOffline (55.00s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-982350
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-982350: exit status 85 (66.340688ms)

                                                
                                                
-- stdout --
	* Profile "addons-982350" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-982350"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-982350
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-982350: exit status 85 (67.542882ms)

                                                
                                                
-- stdout --
	* Profile "addons-982350" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-982350"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (125.13s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-982350 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-982350 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m5.132854152s)
--- PASS: TestAddons/Setup (125.13s)

                                                
                                    
TestAddons/serial/Volcano (39.23s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 14.956284ms
addons_test.go:876: volcano-admission stabilized in 15.010148ms
addons_test.go:868: volcano-scheduler stabilized in 15.047301ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-pr7lf" [4886de24-25b0-4cc7-a3a2-2823979a4148] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003465695s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-sq2qr" [27ca81a6-1527-4796-afcb-2b930a08ffb5] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003174688s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-vktqq" [8b49ec27-dfa5-4313-b549-352d5649314e] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00411196s
addons_test.go:903: (dbg) Run:  kubectl --context addons-982350 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-982350 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-982350 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [6d3b27f1-feb9-43cc-b234-5b26369cd1e9] Pending
helpers_test.go:352: "test-job-nginx-0" [6d3b27f1-feb9-43cc-b234-5b26369cd1e9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [6d3b27f1-feb9-43cc-b234-5b26369cd1e9] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003644189s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982350 addons disable volcano --alsologtostderr -v=1: (11.882311264s)
--- PASS: TestAddons/serial/Volcano (39.23s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-982350 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-982350 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.46s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-982350 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-982350 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [aecb3653-ab5d-458d-85d2-885193f47262] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [aecb3653-ab5d-458d-85d2-885193f47262] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003096359s
addons_test.go:694: (dbg) Run:  kubectl --context addons-982350 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-982350 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-982350 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.46s)

                                                
                                    
TestAddons/parallel/Registry (15.66s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.715159ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-pgrmq" [18b58a03-bfea-4bae-9549-ba2002fd6962] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005950857s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-gp29l" [d9caa95a-418a-480f-a57e-625d567b6be1] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004390191s
addons_test.go:392: (dbg) Run:  kubectl --context addons-982350 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-982350 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-982350 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.83229695s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 ip
2025/11/24 02:27:59 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.66s)
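The registry addon is probed twice above: in-cluster through the Service DNS name (wget --spider) and from the host through the node IP on port 5000, where registry-proxy listens. A hand-run variant of the host-side probe while the addon is enabled, using the IP reported by the "minikube ip" call in the log; the /v2/ path is the standard registry API root and is an illustrative addition, the test itself only requests the root URL:
	curl -sI http://192.168.49.2:5000/v2/ | head -n1
	# an HTTP 200 here means registry-proxy is forwarding to the in-cluster registry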

                                                
                                    
TestAddons/parallel/RegistryCreds (0.63s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.880055ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-982350
addons_test.go:332: (dbg) Run:  kubectl --context addons-982350 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.63s)

                                                
                                    
TestAddons/parallel/Ingress (21.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-982350 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-982350 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-982350 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a1597710-6abd-48ec-881d-c1414839cb5f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a1597710-6abd-48ec-881d-c1414839cb5f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.002935462s
I1124 02:28:12.861754    8429 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-982350 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982350 addons disable ingress-dns --alsologtostderr -v=1: (1.42946322s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982350 addons disable ingress --alsologtostderr -v=1: (7.709421823s)
--- PASS: TestAddons/parallel/Ingress (21.40s)
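The Ingress test exercises two paths: an HTTP request carrying a Host header that matches the nginx Ingress rule (sent from inside the node, where the controller listens on port 80), and a DNS lookup against the ingress-dns addon at the node IP. Both can be repeated by hand while the addons are still enabled; 192.168.49.2 is whatever "minikube -p addons-982350 ip" reported above and will differ on other hosts:
	out/minikube-linux-amd64 -p addons-982350 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test 192.168.49.2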

                                                
                                    
TestAddons/parallel/InspektorGadget (11.73s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-qblmx" [97b65cf8-1f49-43b3-9ca4-7794e5537c6a] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003528333s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982350 addons disable inspektor-gadget --alsologtostderr -v=1: (5.720596673s)
--- PASS: TestAddons/parallel/InspektorGadget (11.73s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.66s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.496051ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-lt4sj" [b83d8228-1c3e-42f7-a9fa-f9de5d68140c] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0032686s
addons_test.go:463: (dbg) Run:  kubectl --context addons-982350 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.66s)

                                                
                                    
TestAddons/parallel/CSI (49.43s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1124 02:27:55.434762    8429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1124 02:27:55.438534    8429 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 02:27:55.438561    8429 kapi.go:107] duration metric: took 3.822022ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.832378ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-982350 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-982350 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [74e393e9-ad0e-41f8-a6a6-5e731ba2d292] Pending
helpers_test.go:352: "task-pv-pod" [74e393e9-ad0e-41f8-a6a6-5e731ba2d292] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [74e393e9-ad0e-41f8-a6a6-5e731ba2d292] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003601248s
addons_test.go:572: (dbg) Run:  kubectl --context addons-982350 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-982350 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-982350 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-982350 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-982350 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-982350 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-982350 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [2215ceda-62df-4124-ae51-83d643a77fad] Pending
helpers_test.go:352: "task-pv-pod-restore" [2215ceda-62df-4124-ae51-83d643a77fad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [2215ceda-62df-4124-ae51-83d643a77fad] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002774707s
addons_test.go:614: (dbg) Run:  kubectl --context addons-982350 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-982350 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-982350 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982350 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.540208887s)
--- PASS: TestAddons/parallel/CSI (49.43s)
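The long run of jsonpath polls above is the helper waiting for each PVC to leave Pending. A compact hand-rolled equivalent, assuming the phase being waited for is Bound (an assumption; the helper's exact target phase is not shown in this log):
	# hypothetical wait loop, not part of the test suite
	until [ "$(kubectl --context addons-982350 -n default get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
	  sleep 2
	done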

                                                
                                    
TestAddons/parallel/Headlamp (16.52s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-982350 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-8mkrm" [161e3c9a-7512-464a-aa91-4ab86336b0c7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-8mkrm" [161e3c9a-7512-464a-aa91-4ab86336b0c7] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003713574s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982350 addons disable headlamp --alsologtostderr -v=1: (5.738715796s)
--- PASS: TestAddons/parallel/Headlamp (16.52s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-wb7sg" [733361a1-e8f4-4830-9037-7fd0e2e1e517] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003729905s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

                                                
                                    
TestAddons/parallel/LocalPath (52.64s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-982350 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-982350 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-982350 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [ad99a321-c84b-4806-97f9-6cb4ec0e4366] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [ad99a321-c84b-4806-97f9-6cb4ec0e4366] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [ad99a321-c84b-4806-97f9-6cb4ec0e4366] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002676789s
addons_test.go:967: (dbg) Run:  kubectl --context addons-982350 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 ssh "cat /opt/local-path-provisioner/pvc-e10810fd-af61-4198-96e5-3f409eec7e8a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-982350 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-982350 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982350 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.740265709s)
--- PASS: TestAddons/parallel/LocalPath (52.64s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.55s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-cltlt" [a828391c-61ee-4b9d-b322-a5403dfb3b82] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.014250992s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.55s)

                                                
                                    
TestAddons/parallel/Yakd (11.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-rpwz7" [635ca8c8-ed2c-4217-ad58-95efe1fd8ed0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002829666s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-982350 addons disable yakd --alsologtostderr -v=1: (5.663114952s)
--- PASS: TestAddons/parallel/Yakd (11.67s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-t8ng4" [4a433f95-ba4c-4e34-8240-8076ef99bc4d] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.01326378s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-982350 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.54s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.78s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-982350
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-982350: (12.492946533s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-982350
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-982350
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-982350
--- PASS: TestAddons/StoppedEnableDisable (12.78s)

                                                
                                    
TestCertOptions (25.32s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-070637 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-070637 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (22.218893355s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-070637 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-070637 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-070637 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-070637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-070637
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-070637: (2.4418559s)
--- PASS: TestCertOptions (25.32s)
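TestCertOptions asserts that the extra --apiserver-ips/--apiserver-names and the non-default port end up in the API server's serving certificate and kubeconfig. While the profile still exists, the openssl command from the log can be narrowed to the relevant section (the grep filter is an illustrative addition, not part of the test):
	out/minikube-linux-amd64 -p cert-options-070637 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
	# 192.168.15.15 and www.google.com should appear among the SANs; port 8555 shows up in the admin.conf checked above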

                                                
                                    
TestCertExpiration (215.47s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-004045 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-004045 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (27.532960945s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-004045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-004045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.114758822s)
helpers_test.go:175: Cleaning up "cert-expiration-004045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-004045
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-004045: (2.817006025s)
--- PASS: TestCertExpiration (215.47s)
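TestCertExpiration first starts the profile with --cert-expiration=3m and then restarts it with 8760h. Assuming the certificates live at the same standard path used by the cert-options test above, the validity window can be inspected directly while the profile is still up; this is a sketch, not something the test runs:
	out/minikube-linux-amd64 -p cert-expiration-004045 ssh "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"
	# notBefore/notAfter should reflect the --cert-expiration value passed on the most recent start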

                                                
                                    
TestForceSystemdFlag (25.73s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-031492 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-031492 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.143649779s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-031492 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-031492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-031492
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-031492: (3.266061362s)
--- PASS: TestForceSystemdFlag (25.73s)

                                                
                                    
TestForceSystemdEnv (29.03s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-654027 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-654027 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (26.21050231s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-654027 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-654027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-654027
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-654027: (2.489791043s)
--- PASS: TestForceSystemdEnv (29.03s)
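Both force-systemd tests read /etc/containerd/config.toml over SSH; the property of interest is the cgroup driver used by runc. A quick manual check while the profile still exists, assuming the assertion is the usual SystemdCgroup toggle (the grep is illustrative and not part of the test):
	out/minikube-linux-amd64 -p force-systemd-env-654027 ssh "grep -n 'SystemdCgroup' /etc/containerd/config.toml"
	# expected when --force-systemd is honoured: SystemdCgroup = true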

                                                
                                    
TestErrorSpam/setup (19.37s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-272786 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-272786 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-272786 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-272786 --driver=docker  --container-runtime=containerd: (19.368505769s)
--- PASS: TestErrorSpam/setup (19.37s)

                                                
                                    
TestErrorSpam/start (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

                                                
                                    
TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
TestErrorSpam/pause (1.45s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 pause
--- PASS: TestErrorSpam/pause (1.45s)

                                                
                                    
TestErrorSpam/unpause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

                                                
                                    
TestErrorSpam/stop (2.1s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 stop: (1.892738577s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-272786 --log_dir /tmp/nospam-272786 stop
--- PASS: TestErrorSpam/stop (2.10s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21975-4883/.minikube/files/etc/test/nested/copy/8429/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (40.1s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-524458 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-524458 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (40.095286318s)
--- PASS: TestFunctional/serial/StartWithProxy (40.10s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.87s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1124 02:31:03.931695    8429 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-524458 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-524458 --alsologtostderr -v=8: (5.869438179s)
functional_test.go:678: soft start took 5.870236086s for "functional-524458" cluster.
I1124 02:31:09.801610    8429 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (5.87s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-524458 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.71s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-524458 /tmp/TestFunctionalserialCacheCmdcacheadd_local4591341/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 cache add minikube-local-cache-test:functional-524458
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-524458 cache add minikube-local-cache-test:functional-524458: (1.552921556s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 cache delete minikube-local-cache-test:functional-524458
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-524458
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.90s)
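
A minimal sketch of the same local-cache round trip, reusing the profile name and image tag from this run (the build directory is a placeholder for any directory containing a Dockerfile):

    $ docker build -t minikube-local-cache-test:functional-524458 ./some-build-dir
    $ out/minikube-linux-amd64 -p functional-524458 cache add minikube-local-cache-test:functional-524458
    $ out/minikube-linux-amd64 -p functional-524458 cache delete minikube-local-cache-test:functional-524458
    $ docker rmi minikube-local-cache-test:functional-524458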

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.983275ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)
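
The sequence above can be reproduced by hand; a sketch assuming the same profile (crictl runs inside the node, so the image is removed there and then restored from minikube's cache):

    $ out/minikube-linux-amd64 -p functional-524458 ssh sudo crictl rmi registry.k8s.io/pause:latest
    $ out/minikube-linux-amd64 -p functional-524458 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    $ out/minikube-linux-amd64 -p functional-524458 cache reload
    $ out/minikube-linux-amd64 -p functional-524458 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again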

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 kubectl -- --context functional-524458 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-524458 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.44s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-524458 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1124 02:31:46.515983    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:46.522460    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:46.533918    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:46.555368    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:46.596857    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:46.678381    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:46.840035    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:47.161808    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:47.803908    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:49.085496    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:51.647428    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:56.769872    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-524458 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.442502067s)
functional_test.go:776: restart took 40.442618136s for "functional-524458" cluster.
I1124 02:31:57.311029    8429 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (40.44s)
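
As exercised above, --extra-config takes component.flag=value pairs that are passed through to the named control-plane component; a sketch with the same admission-plugin override used by this test:

    $ out/minikube-linux-amd64 start -p functional-524458 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all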

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-524458 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-524458 logs: (1.209160974s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 logs --file /tmp/TestFunctionalserialLogsFileCmd3723852163/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-524458 logs --file /tmp/TestFunctionalserialLogsFileCmd3723852163/001/logs.txt: (1.256713003s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                    
TestFunctional/serial/InvalidService (3.94s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-524458 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-524458
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-524458: exit status 115 (359.258858ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32222 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-524458 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)
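
A sketch of the negative case checked above, assuming the same testdata manifest (a Service with no running pods behind it): minikube service should exit 115 with SVC_UNREACHABLE instead of returning a working URL.

    $ kubectl --context functional-524458 apply -f testdata/invalidsvc.yaml
    $ out/minikube-linux-amd64 service invalid-svc -p functional-524458; echo "exit=$?"   # expect exit=115
    $ kubectl --context functional-524458 delete -f testdata/invalidsvc.yaml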

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 config get cpus: exit status 14 (93.045952ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 config get cpus: exit status 14 (75.956517ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
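
A sketch of the config round trip verified above; `config get` on an unset key exits 14 with "specified key could not be found in config", which is what the two non-zero exits in this test capture:

    $ out/minikube-linux-amd64 -p functional-524458 config set cpus 2
    $ out/minikube-linux-amd64 -p functional-524458 config get cpus            # prints 2
    $ out/minikube-linux-amd64 -p functional-524458 config unset cpus
    $ out/minikube-linux-amd64 -p functional-524458 config get cpus; echo $?   # expect 14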

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-524458 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-524458 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (199.733724ms)

                                                
                                                
-- stdout --
	* [functional-524458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 02:32:04.515984   49688 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:32:04.516119   49688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:04.516130   49688 out.go:374] Setting ErrFile to fd 2...
	I1124 02:32:04.516136   49688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:04.516404   49688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 02:32:04.516959   49688 out.go:368] Setting JSON to false
	I1124 02:32:04.517991   49688 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":868,"bootTime":1763950657,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:32:04.518075   49688 start.go:143] virtualization: kvm guest
	I1124 02:32:04.519835   49688 out.go:179] * [functional-524458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:32:04.522003   49688 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:32:04.522009   49688 notify.go:221] Checking for updates...
	I1124 02:32:04.524306   49688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:32:04.526472   49688 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 02:32:04.527492   49688 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 02:32:04.528664   49688 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:32:04.530011   49688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:32:04.531579   49688 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 02:32:04.532274   49688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:32:04.561292   49688 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:32:04.561420   49688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:04.631816   49688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:04.619941292 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:04.631951   49688 docker.go:319] overlay module found
	I1124 02:32:04.635919   49688 out.go:179] * Using the docker driver based on existing profile
	I1124 02:32:04.637049   49688 start.go:309] selected driver: docker
	I1124 02:32:04.637067   49688 start.go:927] validating driver "docker" against &{Name:functional-524458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-524458 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:04.637180   49688 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:32:04.639503   49688 out.go:203] 
	W1124 02:32:04.640746   49688 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 02:32:04.641957   49688 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-524458 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.45s)
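
A sketch of the dry-run validation above: with --dry-run minikube only validates the requested configuration against the existing profile, so an undersized memory request fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) and a valid request exits 0 without touching the cluster:

    $ out/minikube-linux-amd64 start -p functional-524458 --dry-run --memory 250MB \
        --driver=docker --container-runtime=containerd; echo "exit=$?"   # expect exit=23
    $ out/minikube-linux-amd64 start -p functional-524458 --dry-run --alsologtostderr -v=1 \
        --driver=docker --container-runtime=containerd                   # valid config, exits 0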

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-524458 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-524458 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (196.94578ms)

                                                
                                                
-- stdout --
	* [functional-524458] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 02:32:04.331598   49501 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:32:04.331711   49501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:04.331725   49501 out.go:374] Setting ErrFile to fd 2...
	I1124 02:32:04.331732   49501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:04.332182   49501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 02:32:04.332851   49501 out.go:368] Setting JSON to false
	I1124 02:32:04.334260   49501 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":867,"bootTime":1763950657,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:32:04.334349   49501 start.go:143] virtualization: kvm guest
	I1124 02:32:04.336217   49501 out.go:179] * [functional-524458] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1124 02:32:04.337837   49501 notify.go:221] Checking for updates...
	I1124 02:32:04.337902   49501 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:32:04.339020   49501 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:32:04.340210   49501 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 02:32:04.341306   49501 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 02:32:04.342485   49501 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:32:04.347754   49501 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:32:04.349423   49501 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 02:32:04.349961   49501 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:32:04.375170   49501 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:32:04.375270   49501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:04.435028   49501 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:04.424320305 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:04.435151   49501 docker.go:319] overlay module found
	I1124 02:32:04.436670   49501 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 02:32:04.437995   49501 start.go:309] selected driver: docker
	I1124 02:32:04.438009   49501 start.go:927] validating driver "docker" against &{Name:functional-524458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-524458 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:04.438134   49501 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:32:04.439973   49501 out.go:203] 
	W1124 02:32:04.441091   49501 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 02:32:04.442196   49501 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)
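
A sketch of the status formats used above; the -f template labels are free-form text around Go-template fields (the field names .Host, .Kubelet, .APIServer and .Kubeconfig are what matter, the label spelling here is my own):

    $ out/minikube-linux-amd64 -p functional-524458 status
    $ out/minikube-linux-amd64 -p functional-524458 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    $ out/minikube-linux-amd64 -p functional-524458 status -o json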

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh -n functional-524458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 cp functional-524458:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2000119791/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh -n functional-524458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh -n functional-524458 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.95s)
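
A sketch of the copy round trip above, assuming the same profile (the host-side destination path is a placeholder):

    $ out/minikube-linux-amd64 -p functional-524458 cp testdata/cp-test.txt /home/docker/cp-test.txt
    $ out/minikube-linux-amd64 -p functional-524458 ssh -n functional-524458 "sudo cat /home/docker/cp-test.txt"
    $ out/minikube-linux-amd64 -p functional-524458 cp functional-524458:/home/docker/cp-test.txt /tmp/cp-test.txt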

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8429/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "sudo cat /etc/test/nested/copy/8429/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8429.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "sudo cat /etc/ssl/certs/8429.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8429.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "sudo cat /usr/share/ca-certificates/8429.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/84292.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "sudo cat /etc/ssl/certs/84292.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/84292.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "sudo cat /usr/share/ca-certificates/84292.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)
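
A sketch of the cert-sync check above; the 8429.pem / 84292.pem names are derived from this run's test process ID, so substitute the name of whatever certificate was placed on the host for syncing:

    $ out/minikube-linux-amd64 -p functional-524458 ssh "sudo cat /etc/ssl/certs/8429.pem"
    $ out/minikube-linux-amd64 -p functional-524458 ssh "sudo cat /usr/share/ca-certificates/8429.pem"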

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-524458 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 ssh "sudo systemctl is-active docker": exit status 1 (290.355183ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 ssh "sudo systemctl is-active crio": exit status 1 (281.342045ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
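
A sketch of the check above: on a node configured for containerd the other runtimes should report inactive, and the underlying `systemctl is-active` exits 3 for an inactive unit, which is why the ssh commands return non-zero. The containerd line is my own addition for contrast, not part of this test:

    $ out/minikube-linux-amd64 -p functional-524458 ssh "sudo systemctl is-active containerd"   # expected: active (assumption)
    $ out/minikube-linux-amd64 -p functional-524458 ssh "sudo systemctl is-active docker"       # expected: inactive, non-zero exit
    $ out/minikube-linux-amd64 -p functional-524458 ssh "sudo systemctl is-active crio"         # expected: inactive, non-zero exit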

                                                
                                    
TestFunctional/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-524458 /tmp/TestFunctionalparallelMountCmdany-port3391363883/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763951524960886758" to /tmp/TestFunctionalparallelMountCmdany-port3391363883/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763951524960886758" to /tmp/TestFunctionalparallelMountCmdany-port3391363883/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763951524960886758" to /tmp/TestFunctionalparallelMountCmdany-port3391363883/001/test-1763951524960886758
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (315.808971ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 02:32:05.277067    8429 retry.go:31] will retry after 254.013949ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 02:32 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 02:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 02:32 test-1763951524960886758
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh cat /mount-9p/test-1763951524960886758
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-524458 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [419076af-464d-4e5d-94a3-3031c2034f93] Pending
helpers_test.go:352: "busybox-mount" [419076af-464d-4e5d-94a3-3031c2034f93] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [419076af-464d-4e5d-94a3-3031c2034f93] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [419076af-464d-4e5d-94a3-3031c2034f93] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003682917s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-524458 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-524458 /tmp/TestFunctionalparallelMountCmdany-port3391363883/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.70s)
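
A sketch of the 9p mount flow above; the host directory is a placeholder, and the mount command stays in the foreground, so it is backgrounded here:

    $ out/minikube-linux-amd64 mount -p functional-524458 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
    $ out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T /mount-9p | grep 9p"
    $ out/minikube-linux-amd64 -p functional-524458 ssh -- ls -la /mount-9p
    $ out/minikube-linux-amd64 -p functional-524458 ssh "sudo umount -f /mount-9p"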

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "387.394224ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "65.769759ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)
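
A sketch of the listing variants timed above; the light variants skip probing live cluster status, which is consistent with the tens-of-milliseconds timings versus the few hundred milliseconds for the full listing:

    $ out/minikube-linux-amd64 profile list
    $ out/minikube-linux-amd64 profile list -l
    $ out/minikube-linux-amd64 profile list -o json --light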

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
E1124 02:32:07.011548    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1381: Took "357.244806ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "64.31869ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-524458 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-524458 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-524458 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-524458 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 52240: os: process already finished
helpers_test.go:519: unable to terminate pid 52058: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-524458 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-524458 /tmp/TestFunctionalparallelMountCmdspecific-port1198358865/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (293.54626ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 02:32:16.951667    8429 retry.go:31] will retry after 257.731742ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-524458 /tmp/TestFunctionalparallelMountCmdspecific-port1198358865/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 ssh "sudo umount -f /mount-9p": exit status 1 (275.220147ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-524458 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-524458 /tmp/TestFunctionalparallelMountCmdspecific-port1198358865/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.59s)
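
The specific-port test above boils down to a 9p mount on a fixed host port plus a findmnt check inside the node. A hand-run sketch with the same mount point and port; /tmp/src is just a placeholder for any host directory:

    out/minikube-linux-amd64 mount -p functional-524458 /tmp/src:/mount-9p --port 46464 &
    out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-524458 ssh "sudo umount -f /mount-9p"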

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-524458 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1264968832/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-524458 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1264968832/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-524458 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1264968832/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T" /mount1: exit status 1 (350.798729ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 02:32:18.600486    8429 retry.go:31] will retry after 445.857929ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-524458 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-524458 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1264968832/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-524458 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1264968832/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-524458 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1264968832/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.70s)
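
The mount --kill=true step above terminates all mount processes for the profile at once, which is why the per-mount stop attempts that follow find no surviving parent process. The same cleanup can be run by hand:

    out/minikube-linux-amd64 mount -p functional-524458 --kill=true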

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-524458 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-524458 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-524458
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-524458
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-524458 image ls --format short --alsologtostderr:
I1124 02:38:29.480245   60559 out.go:360] Setting OutFile to fd 1 ...
I1124 02:38:29.480397   60559 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:38:29.480408   60559 out.go:374] Setting ErrFile to fd 2...
I1124 02:38:29.480412   60559 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:38:29.480654   60559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
I1124 02:38:29.481366   60559 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 02:38:29.481463   60559 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 02:38:29.481870   60559 cli_runner.go:164] Run: docker container inspect functional-524458 --format={{.State.Status}}
I1124 02:38:29.500127   60559 ssh_runner.go:195] Run: systemctl --version
I1124 02:38:29.500184   60559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-524458
I1124 02:38:29.517071   60559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/functional-524458/id_rsa Username:docker}
I1124 02:38:29.615640   60559 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
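
As the stderr trace shows, image ls does not query containerd from the host; it opens an SSH session to the node and runs crictl there. The same listing can be obtained manually:

    out/minikube-linux-amd64 -p functional-524458 ssh "sudo crictl images"
    out/minikube-linux-amd64 -p functional-524458 image ls --format short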

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-524458 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ localhost/my-image                          │ functional-524458  │ sha256:f7108d │ 775kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kicbase/echo-server               │ functional-524458  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-524458  │ sha256:c5f9c9 │ 992B   │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-524458 image ls --format table --alsologtostderr:
I1124 02:38:33.369696   61076 out.go:360] Setting OutFile to fd 1 ...
I1124 02:38:33.369878   61076 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:38:33.369887   61076 out.go:374] Setting ErrFile to fd 2...
I1124 02:38:33.369892   61076 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:38:33.370104   61076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
I1124 02:38:33.370639   61076 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 02:38:33.370730   61076 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 02:38:33.371135   61076 cli_runner.go:164] Run: docker container inspect functional-524458 --format={{.State.Status}}
I1124 02:38:33.389202   61076 ssh_runner.go:195] Run: systemctl --version
I1124 02:38:33.389254   61076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-524458
I1124 02:38:33.407070   61076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/functional-524458/id_rsa Username:docker}
I1124 02:38:33.505644   61076 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-524458 image ls --format json --alsologtostderr:
[{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-524458"],"size":"2372971"},{"id":"sha256:c5f9c96147a6551bb3fec3f6fd6fe72cbe92589607e753969f33673691f9db15","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-524458"],"size":"992"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/core
dns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f51791
53fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:f7108dcaddc08d3333ef1dcc13fbd6f5683dc41ad7b4c012c1fa8fd013eec4b3","repoDigests":[],"repoTags":["localhost/my-image:functional-524458"],"size":"774887"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:409467f978b4a30fe717012736557d63
7f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-524458 image ls --format json --alsologtostderr:
I1124 02:38:33.144887   61021 out.go:360] Setting OutFile to fd 1 ...
I1124 02:38:33.144971   61021 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:38:33.144975   61021 out.go:374] Setting ErrFile to fd 2...
I1124 02:38:33.144979   61021 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:38:33.145174   61021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
I1124 02:38:33.145688   61021 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 02:38:33.145792   61021 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 02:38:33.146186   61021 cli_runner.go:164] Run: docker container inspect functional-524458 --format={{.State.Status}}
I1124 02:38:33.164952   61021 ssh_runner.go:195] Run: systemctl --version
I1124 02:38:33.165000   61021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-524458
I1124 02:38:33.183031   61021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/functional-524458/id_rsa Username:docker}
I1124 02:38:33.281472   61021 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
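
The JSON form above is the easiest one to post-process; each entry carries id, repoDigests, repoTags and size. A small sketch for pulling out just the tags, assuming jq is available on the host:

    out/minikube-linux-amd64 -p functional-524458 image ls --format json | jq -r '.[].repoTags[]'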

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-524458 image ls --format yaml --alsologtostderr:
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-524458
size: "2372971"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:c5f9c96147a6551bb3fec3f6fd6fe72cbe92589607e753969f33673691f9db15
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-524458
size: "992"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-524458 image ls --format yaml --alsologtostderr:
I1124 02:38:29.705142   60613 out.go:360] Setting OutFile to fd 1 ...
I1124 02:38:29.705260   60613 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:38:29.705272   60613 out.go:374] Setting ErrFile to fd 2...
I1124 02:38:29.705276   60613 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:38:29.705452   60613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
I1124 02:38:29.706011   60613 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 02:38:29.706101   60613 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 02:38:29.706495   60613 cli_runner.go:164] Run: docker container inspect functional-524458 --format={{.State.Status}}
I1124 02:38:29.725852   60613 ssh_runner.go:195] Run: systemctl --version
I1124 02:38:29.725902   60613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-524458
I1124 02:38:29.743803   60613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/functional-524458/id_rsa Username:docker}
I1124 02:38:29.842415   60613 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524458 ssh pgrep buildkitd: exit status 1 (270.425626ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image build -t localhost/my-image:functional-524458 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-524458 image build -t localhost/my-image:functional-524458 testdata/build --alsologtostderr: (2.714016694s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-524458 image build -t localhost/my-image:functional-524458 testdata/build --alsologtostderr:
I1124 02:38:30.204641   60789 out.go:360] Setting OutFile to fd 1 ...
I1124 02:38:30.204947   60789 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:38:30.204957   60789 out.go:374] Setting ErrFile to fd 2...
I1124 02:38:30.204961   60789 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:38:30.205147   60789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
I1124 02:38:30.205693   60789 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 02:38:30.206325   60789 config.go:182] Loaded profile config "functional-524458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 02:38:30.206728   60789 cli_runner.go:164] Run: docker container inspect functional-524458 --format={{.State.Status}}
I1124 02:38:30.225069   60789 ssh_runner.go:195] Run: systemctl --version
I1124 02:38:30.225118   60789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-524458
I1124 02:38:30.243094   60789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/functional-524458/id_rsa Username:docker}
I1124 02:38:30.341501   60789 build_images.go:162] Building image from path: /tmp/build.1230890438.tar
I1124 02:38:30.341558   60789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 02:38:30.349640   60789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1230890438.tar
I1124 02:38:30.353351   60789 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1230890438.tar: stat -c "%s %y" /var/lib/minikube/build/build.1230890438.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1230890438.tar': No such file or directory
I1124 02:38:30.353380   60789 ssh_runner.go:362] scp /tmp/build.1230890438.tar --> /var/lib/minikube/build/build.1230890438.tar (3072 bytes)
I1124 02:38:30.371123   60789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1230890438
I1124 02:38:30.379292   60789 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1230890438 -xf /var/lib/minikube/build/build.1230890438.tar
I1124 02:38:30.387617   60789 containerd.go:394] Building image: /var/lib/minikube/build/build.1230890438
I1124 02:38:30.387719   60789 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1230890438 --local dockerfile=/var/lib/minikube/build/build.1230890438 --output type=image,name=localhost/my-image:functional-524458
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a428eb58e33c3eb418349dee0308451ec16c3e3ea256b1a86a4f28149a4b98dc done
#8 exporting config sha256:f7108dcaddc08d3333ef1dcc13fbd6f5683dc41ad7b4c012c1fa8fd013eec4b3 done
#8 naming to localhost/my-image:functional-524458
#8 naming to localhost/my-image:functional-524458 done
#8 DONE 0.1s
I1124 02:38:32.836980   60789 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1230890438 --local dockerfile=/var/lib/minikube/build/build.1230890438 --output type=image,name=localhost/my-image:functional-524458: (2.449226711s)
I1124 02:38:32.837046   60789 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1230890438
I1124 02:38:32.847004   60789 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1230890438.tar
I1124 02:38:32.855123   60789 build_images.go:218] Built localhost/my-image:functional-524458 from /tmp/build.1230890438.tar
I1124 02:38:32.855154   60789 build_images.go:134] succeeded building to: functional-524458
I1124 02:38:32.855160   60789 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.21s)
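
The build path is visible in the stderr trace: after the pgrep buildkitd probe (exit 1 here), minikube tars up testdata/build, copies it under /var/lib/minikube/build inside the node and drives buildctl against containerd there. Judging by build steps #5-#7, the Dockerfile amounts to a busybox base, a RUN true and an ADD of content.txt. Reproducing the test by hand:

    out/minikube-linux-amd64 -p functional-524458 image build -t localhost/my-image:functional-524458 testdata/build
    out/minikube-linux-amd64 -p functional-524458 image ls | grep my-image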

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.714283194s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-524458
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image load --daemon kicbase/echo-server:functional-524458 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image load --daemon kicbase/echo-server:functional-524458 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-524458
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image load --daemon kicbase/echo-server:functional-524458 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image save kicbase/echo-server:functional-524458 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image rm kicbase/echo-server:functional-524458 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-524458
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 image save --daemon kicbase/echo-server:functional-524458 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-524458
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
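
Taken together, the last few image subtests exercise a full save/remove/load round trip between the node and the host docker daemon. The equivalent manual sequence, using an arbitrary /tmp path in place of the workspace tarball above:

    out/minikube-linux-amd64 -p functional-524458 image save kicbase/echo-server:functional-524458 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-524458 image rm kicbase/echo-server:functional-524458
    out/minikube-linux-amd64 -p functional-524458 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-524458 image save --daemon kicbase/echo-server:functional-524458
    docker image inspect kicbase/echo-server:functional-524458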

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 update-context --alsologtostderr -v=2
E1124 02:41:46.515277    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
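
update-context rewrites the kubeconfig entry for the profile (its usual purpose is recovering from an IP or port change); the three variants above presumably differ only in what minikube finds in the existing kubeconfig. A quick manual check:

    out/minikube-linux-amd64 -p functional-524458 update-context
    kubectl config current-context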

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-524458 service list: (1.710274147s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-524458 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-524458 service list -o json: (1.709701404s)
functional_test.go:1504: Took "1.709802497s" to run "out/minikube-linux-amd64 -p functional-524458 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-524458
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-524458
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-524458
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (136.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-167965 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m15.457580729s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (136.19s)
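
The HA start above brings up a multi-control-plane cluster in a single call; the status and profile listing used by the later subtests can be checked the same way by hand:

    out/minikube-linux-amd64 -p ha-167965 start --ha --memory 3072 --wait true --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5
    out/minikube-linux-amd64 profile list --output json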

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-167965 kubectl -- rollout status deployment/busybox: (3.337981453s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-q8t24 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-r4s9n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-vbwlg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-q8t24 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-r4s9n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-vbwlg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-q8t24 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-r4s9n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-vbwlg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.40s)
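
DeployApp applies the busybox DNS test manifest, waits for the rollout and then resolves in-cluster names from each pod. A condensed sketch; the pod name placeholder has to be filled in from the get pods output:

    out/minikube-linux-amd64 -p ha-167965 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 -p ha-167965 kubectl -- rollout status deployment/busybox
    out/minikube-linux-amd64 -p ha-167965 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 -p ha-167965 kubectl -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local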

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-q8t24 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-q8t24 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-r4s9n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-r4s9n -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-vbwlg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 kubectl -- exec busybox-7b57f96db7-vbwlg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-167965 node add --alsologtostderr -v 5: (23.929751008s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.84s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-167965 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp testdata/cp-test.txt ha-167965:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile369987354/001/cp-test_ha-167965.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965:/home/docker/cp-test.txt ha-167965-m02:/home/docker/cp-test_ha-167965_ha-167965-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m02 "sudo cat /home/docker/cp-test_ha-167965_ha-167965-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965:/home/docker/cp-test.txt ha-167965-m03:/home/docker/cp-test_ha-167965_ha-167965-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m03 "sudo cat /home/docker/cp-test_ha-167965_ha-167965-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965:/home/docker/cp-test.txt ha-167965-m04:/home/docker/cp-test_ha-167965_ha-167965-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m04 "sudo cat /home/docker/cp-test_ha-167965_ha-167965-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp testdata/cp-test.txt ha-167965-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile369987354/001/cp-test_ha-167965-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m02:/home/docker/cp-test.txt ha-167965:/home/docker/cp-test_ha-167965-m02_ha-167965.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965 "sudo cat /home/docker/cp-test_ha-167965-m02_ha-167965.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m02:/home/docker/cp-test.txt ha-167965-m03:/home/docker/cp-test_ha-167965-m02_ha-167965-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m03 "sudo cat /home/docker/cp-test_ha-167965-m02_ha-167965-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m02:/home/docker/cp-test.txt ha-167965-m04:/home/docker/cp-test_ha-167965-m02_ha-167965-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m04 "sudo cat /home/docker/cp-test_ha-167965-m02_ha-167965-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp testdata/cp-test.txt ha-167965-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile369987354/001/cp-test_ha-167965-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m03:/home/docker/cp-test.txt ha-167965:/home/docker/cp-test_ha-167965-m03_ha-167965.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965 "sudo cat /home/docker/cp-test_ha-167965-m03_ha-167965.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m03:/home/docker/cp-test.txt ha-167965-m02:/home/docker/cp-test_ha-167965-m03_ha-167965-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m02 "sudo cat /home/docker/cp-test_ha-167965-m03_ha-167965-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m03:/home/docker/cp-test.txt ha-167965-m04:/home/docker/cp-test_ha-167965-m03_ha-167965-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m04 "sudo cat /home/docker/cp-test_ha-167965-m03_ha-167965-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp testdata/cp-test.txt ha-167965-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile369987354/001/cp-test_ha-167965-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m04:/home/docker/cp-test.txt ha-167965:/home/docker/cp-test_ha-167965-m04_ha-167965.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965 "sudo cat /home/docker/cp-test_ha-167965-m04_ha-167965.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m04:/home/docker/cp-test.txt ha-167965-m02:/home/docker/cp-test_ha-167965-m04_ha-167965-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m02 "sudo cat /home/docker/cp-test_ha-167965-m04_ha-167965-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m04:/home/docker/cp-test.txt ha-167965-m03:/home/docker/cp-test_ha-167965-m04_ha-167965-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m03 "sudo cat /home/docker/cp-test_ha-167965-m04_ha-167965-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.39s)
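
The CopyFile subtest above drives minikube cp in every direction (host to node, node to host, node to node) and verifies each transfer by reading the file back over SSH. A minimal sketch of the same round trip, reusing the profile and node names from this run:

    # copy a file into a node, then read it back to confirm the transfer
    out/minikube-linux-amd64 -p ha-167965 cp testdata/cp-test.txt ha-167965-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-167965 ssh -n ha-167965-m02 "sudo cat /home/docker/cp-test.txt"
    # node-to-node copies use <node>:<path> on both sides of the same command
    out/minikube-linux-amd64 -p ha-167965 cp ha-167965-m02:/home/docker/cp-test.txt ha-167965-m03:/home/docker/cp-test_ha-167965-m02_ha-167965-m03.txt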

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-167965 node stop m02 --alsologtostderr -v 5: (12.033576781s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5: exit status 7 (711.833234ms)

                                                
                                                
-- stdout --
	ha-167965
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-167965-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-167965-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-167965-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 02:51:38.015613   87308 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:51:38.015876   87308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:51:38.015884   87308 out.go:374] Setting ErrFile to fd 2...
	I1124 02:51:38.015888   87308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:51:38.016122   87308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 02:51:38.016281   87308 out.go:368] Setting JSON to false
	I1124 02:51:38.016307   87308 mustload.go:66] Loading cluster: ha-167965
	I1124 02:51:38.016443   87308 notify.go:221] Checking for updates...
	I1124 02:51:38.016663   87308 config.go:182] Loaded profile config "ha-167965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 02:51:38.016679   87308 status.go:174] checking status of ha-167965 ...
	I1124 02:51:38.017099   87308 cli_runner.go:164] Run: docker container inspect ha-167965 --format={{.State.Status}}
	I1124 02:51:38.036610   87308 status.go:371] ha-167965 host status = "Running" (err=<nil>)
	I1124 02:51:38.036632   87308 host.go:66] Checking if "ha-167965" exists ...
	I1124 02:51:38.036919   87308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-167965
	I1124 02:51:38.056481   87308 host.go:66] Checking if "ha-167965" exists ...
	I1124 02:51:38.056878   87308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:51:38.056922   87308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-167965
	I1124 02:51:38.075163   87308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/ha-167965/id_rsa Username:docker}
	I1124 02:51:38.172292   87308 ssh_runner.go:195] Run: systemctl --version
	I1124 02:51:38.178697   87308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:51:38.191381   87308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:51:38.253230   87308 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 02:51:38.242381615 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:51:38.253703   87308 kubeconfig.go:125] found "ha-167965" server: "https://192.168.49.254:8443"
	I1124 02:51:38.253727   87308 api_server.go:166] Checking apiserver status ...
	I1124 02:51:38.253757   87308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 02:51:38.266086   87308 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup
	W1124 02:51:38.275893   87308 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 02:51:38.275974   87308 ssh_runner.go:195] Run: ls
	I1124 02:51:38.280221   87308 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 02:51:38.284501   87308 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 02:51:38.284539   87308 status.go:463] ha-167965 apiserver status = Running (err=<nil>)
	I1124 02:51:38.284550   87308 status.go:176] ha-167965 status: &{Name:ha-167965 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:51:38.284571   87308 status.go:174] checking status of ha-167965-m02 ...
	I1124 02:51:38.284839   87308 cli_runner.go:164] Run: docker container inspect ha-167965-m02 --format={{.State.Status}}
	I1124 02:51:38.302884   87308 status.go:371] ha-167965-m02 host status = "Stopped" (err=<nil>)
	I1124 02:51:38.302904   87308 status.go:384] host is not running, skipping remaining checks
	I1124 02:51:38.302912   87308 status.go:176] ha-167965-m02 status: &{Name:ha-167965-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:51:38.302936   87308 status.go:174] checking status of ha-167965-m03 ...
	I1124 02:51:38.303196   87308 cli_runner.go:164] Run: docker container inspect ha-167965-m03 --format={{.State.Status}}
	I1124 02:51:38.322327   87308 status.go:371] ha-167965-m03 host status = "Running" (err=<nil>)
	I1124 02:51:38.322349   87308 host.go:66] Checking if "ha-167965-m03" exists ...
	I1124 02:51:38.322602   87308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-167965-m03
	I1124 02:51:38.340499   87308 host.go:66] Checking if "ha-167965-m03" exists ...
	I1124 02:51:38.340737   87308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:51:38.340770   87308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-167965-m03
	I1124 02:51:38.359410   87308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/ha-167965-m03/id_rsa Username:docker}
	I1124 02:51:38.456704   87308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:51:38.470357   87308 kubeconfig.go:125] found "ha-167965" server: "https://192.168.49.254:8443"
	I1124 02:51:38.470391   87308 api_server.go:166] Checking apiserver status ...
	I1124 02:51:38.470436   87308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 02:51:38.483235   87308 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1307/cgroup
	W1124 02:51:38.491978   87308 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1307/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 02:51:38.492023   87308 ssh_runner.go:195] Run: ls
	I1124 02:51:38.496015   87308 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 02:51:38.500412   87308 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 02:51:38.500440   87308 status.go:463] ha-167965-m03 apiserver status = Running (err=<nil>)
	I1124 02:51:38.500450   87308 status.go:176] ha-167965-m03 status: &{Name:ha-167965-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:51:38.500468   87308 status.go:174] checking status of ha-167965-m04 ...
	I1124 02:51:38.500804   87308 cli_runner.go:164] Run: docker container inspect ha-167965-m04 --format={{.State.Status}}
	I1124 02:51:38.518816   87308 status.go:371] ha-167965-m04 host status = "Running" (err=<nil>)
	I1124 02:51:38.518840   87308 host.go:66] Checking if "ha-167965-m04" exists ...
	I1124 02:51:38.519149   87308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-167965-m04
	I1124 02:51:38.536361   87308 host.go:66] Checking if "ha-167965-m04" exists ...
	I1124 02:51:38.536705   87308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:51:38.536749   87308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-167965-m04
	I1124 02:51:38.555995   87308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/ha-167965-m04/id_rsa Username:docker}
	I1124 02:51:38.653900   87308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:51:38.666201   87308 status.go:176] ha-167965-m04 status: &{Name:ha-167965-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.75s)
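
The node stop subtest halts one control-plane member and then inspects the cluster with the status command; in this run status exited with code 7 because m02 reported Stopped, so a non-zero exit is the expected signal rather than a failure. A short sketch of the same check (the echo message is illustrative, not part of the test):

    # stop one control-plane node, then inspect the cluster; a non-zero exit
    # from status (7 in this run) indicates at least one node is not running
    out/minikube-linux-amd64 -p ha-167965 node stop m02 --alsologtostderr -v 5
    if ! out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5; then
        echo "one or more nodes reported as Stopped, as expected after node stop"
    fi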

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.68s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 node start m02 --alsologtostderr -v 5
E1124 02:51:46.515023    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-167965 node start m02 --alsologtostderr -v 5: (7.726445365s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.68s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.05s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 stop --alsologtostderr -v 5
E1124 02:52:03.963227    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:52:03.969617    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:52:03.981002    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:52:04.002439    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:52:04.043878    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:52:04.125314    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:52:04.286873    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:52:04.608629    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:52:05.250714    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:52:06.532614    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:52:09.094921    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:52:14.216243    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:52:24.457706    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-167965 stop --alsologtostderr -v 5: (37.305547895s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 start --wait true --alsologtostderr -v 5
E1124 02:52:44.939966    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-167965 start --wait true --alsologtostderr -v 5: (58.605762849s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.05s)
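
RestartClusterKeepsNodes stops every node, restarts the whole profile with --wait true, and checks that the node list is unchanged. A sketch of that sequence; the temp files and diff are illustrative stand-ins for the comparison the test performs in Go:

    # snapshot the node list, bounce the cluster, and compare afterwards
    out/minikube-linux-amd64 -p ha-167965 node list --alsologtostderr -v 5 > /tmp/nodes.before
    out/minikube-linux-amd64 -p ha-167965 stop --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-167965 start --wait true --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-167965 node list --alsologtostderr -v 5 > /tmp/nodes.after
    diff /tmp/nodes.before /tmp/nodes.after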

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.35s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 node delete m03 --alsologtostderr -v 5
E1124 02:53:25.901467    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-167965 node delete m03 --alsologtostderr -v 5: (8.536400104s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.35s)
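
DeleteSecondaryNode removes the m03 control-plane member and then confirms, from kubectl's point of view, that the remaining nodes are still present and Ready. The equivalent manual steps, using the same profile:

    # drop a control-plane node and re-check cluster membership
    out/minikube-linux-amd64 -p ha-167965 node delete m03 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5
    kubectl get nodes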

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.16s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-167965 stop --alsologtostderr -v 5: (36.036145277s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5: exit status 7 (126.333855ms)

                                                
                                                
-- stdout --
	ha-167965
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-167965-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-167965-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 02:54:11.199391  103708 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:54:11.199486  103708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:54:11.199494  103708 out.go:374] Setting ErrFile to fd 2...
	I1124 02:54:11.199498  103708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:54:11.199682  103708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 02:54:11.199844  103708 out.go:368] Setting JSON to false
	I1124 02:54:11.199868  103708 mustload.go:66] Loading cluster: ha-167965
	I1124 02:54:11.199995  103708 notify.go:221] Checking for updates...
	I1124 02:54:11.200234  103708 config.go:182] Loaded profile config "ha-167965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 02:54:11.200247  103708 status.go:174] checking status of ha-167965 ...
	I1124 02:54:11.200640  103708 cli_runner.go:164] Run: docker container inspect ha-167965 --format={{.State.Status}}
	I1124 02:54:11.222289  103708 status.go:371] ha-167965 host status = "Stopped" (err=<nil>)
	I1124 02:54:11.222344  103708 status.go:384] host is not running, skipping remaining checks
	I1124 02:54:11.222354  103708 status.go:176] ha-167965 status: &{Name:ha-167965 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:54:11.222395  103708 status.go:174] checking status of ha-167965-m02 ...
	I1124 02:54:11.222648  103708 cli_runner.go:164] Run: docker container inspect ha-167965-m02 --format={{.State.Status}}
	I1124 02:54:11.240857  103708 status.go:371] ha-167965-m02 host status = "Stopped" (err=<nil>)
	I1124 02:54:11.240875  103708 status.go:384] host is not running, skipping remaining checks
	I1124 02:54:11.240881  103708 status.go:176] ha-167965-m02 status: &{Name:ha-167965-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:54:11.240900  103708 status.go:174] checking status of ha-167965-m04 ...
	I1124 02:54:11.241131  103708 cli_runner.go:164] Run: docker container inspect ha-167965-m04 --format={{.State.Status}}
	I1124 02:54:11.259624  103708 status.go:371] ha-167965-m04 host status = "Stopped" (err=<nil>)
	I1124 02:54:11.259642  103708 status.go:384] host is not running, skipping remaining checks
	I1124 02:54:11.259648  103708 status.go:176] ha-167965-m04 status: &{Name:ha-167965-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.16s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (56.99s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1124 02:54:47.823657    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-167965 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (56.191010847s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.99s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.86s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-167965 node add --control-plane --alsologtostderr -v 5: (1m19.968734597s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.86s)
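
AddSecondaryNode grows the HA cluster back to a third control-plane member with node add --control-plane and then re-runs status. The same two commands, outside the test harness:

    # add another control-plane node to the running HA profile and re-check it
    out/minikube-linux-amd64 -p ha-167965 node add --control-plane --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-167965 status --alsologtostderr -v 5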

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
TestJSONOutput/start/Command (38.38s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-678107 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1124 02:56:46.514450    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:57:03.962857    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-678107 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (38.383472855s)
--- PASS: TestJSONOutput/start/Command (38.38s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-678107 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-678107 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-678107 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-678107 --output=json --user=testUser: (5.847593569s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-084770 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-084770 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.450932ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"45c9b85c-5c09-4405-b1a0-8a47e4aef9bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-084770] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e65f0d36-b244-401f-b12f-c22e88847252","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21975"}}
	{"specversion":"1.0","id":"e66cfe5f-3a14-4d03-a3a8-eb915a1f896b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"82e87a83-1877-40fe-8b08-e7aea7d5f940","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig"}}
	{"specversion":"1.0","id":"d4775b94-a21a-4621-b552-cd568b23e45f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube"}}
	{"specversion":"1.0","id":"cb553970-1bcc-475a-97da-413070cb7551","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a015a232-ddd5-4867-a053-421709c26ed2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"98f7f5a6-2663-48b6-b420-0918924faf3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-084770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-084770
--- PASS: TestErrorJSONOutput (0.23s)
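
With --output=json, minikube prints one CloudEvents-style JSON object per line, and the failed start above ends with an io.k8s.sigs.minikube.error event carrying the exit code (56, DRV_UNSUPPORTED_OS). A sketch of pulling that event out of the stream; jq is not used by the test and is only an assumed convenience here:

    # each output line is a standalone JSON object; filter for error events
    out/minikube-linux-amd64 start -p json-output-error-084770 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + " " + .data.name + ": " + .data.message'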

                                                
                                    
TestKicCustomNetwork/create_custom_network (36.03s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-116325 --network=
E1124 02:57:31.665656    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-116325 --network=: (33.860472976s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-116325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-116325
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-116325: (2.150803601s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.03s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.27s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-994744 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-994744 --network=bridge: (21.203875508s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-994744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-994744
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-994744: (2.046297048s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.27s)

                                                
                                    
TestKicExistingNetwork (26.29s)

=== RUN   TestKicExistingNetwork
I1124 02:58:28.479104    8429 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1124 02:58:28.495802    8429 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1124 02:58:28.495873    8429 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1124 02:58:28.495889    8429 cli_runner.go:164] Run: docker network inspect existing-network
W1124 02:58:28.513008    8429 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1124 02:58:28.513044    8429 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1124 02:58:28.513060    8429 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1124 02:58:28.513185    8429 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1124 02:58:28.531344    8429 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-38af09d29309 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:6f:c7:33:17:67} reservation:<nil>}
I1124 02:58:28.531684    8429 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001df2df0}
I1124 02:58:28.531708    8429 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1124 02:58:28.531753    8429 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1124 02:58:28.580605    8429 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-634703 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-634703 --network=existing-network: (24.151394026s)
helpers_test.go:175: Cleaning up "existing-network-634703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-634703
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-634703: (1.998357814s)
I1124 02:58:54.747612    8429 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.29s)
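
TestKicExistingNetwork first creates a Docker bridge network itself (the log above shows minikube's helper picking the free 192.168.58.0/24 subnet) and then starts a profile attached to it via --network=existing-network. The two commands, copied from this run:

    # pre-create the network, then attach a new profile to it instead of the default
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
    out/minikube-linux-amd64 start -p existing-network-634703 --network=existing-network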

                                                
                                    
TestKicCustomSubnet (23.63s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-185884 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-185884 --subnet=192.168.60.0/24: (21.472746167s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-185884 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-185884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-185884
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-185884: (2.134409821s)
--- PASS: TestKicCustomSubnet (23.63s)
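
TestKicCustomSubnet asks for a specific subnet at start time and verifies it on the profile's Docker network. The same pair of commands outside the harness:

    # request a subnet, then confirm it on the created network
    out/minikube-linux-amd64 start -p custom-subnet-185884 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-185884 --format "{{(index .IPAM.Config 0).Subnet}}"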

                                                
                                    
TestKicStaticIP (26.35s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-580840 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-580840 --static-ip=192.168.200.200: (24.044835537s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-580840 ip
helpers_test.go:175: Cleaning up "static-ip-580840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-580840
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-580840: (2.154835721s)
--- PASS: TestKicStaticIP (26.35s)
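
TestKicStaticIP pins the node address with --static-ip and reads it back with the ip subcommand. The equivalent manual check:

    # pin the node IP at creation time and confirm it afterwards
    out/minikube-linux-amd64 start -p static-ip-580840 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-580840 ip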

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (47.89s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-821110 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-821110 --driver=docker  --container-runtime=containerd: (19.362944836s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-823840 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-823840 --driver=docker  --container-runtime=containerd: (22.560997142s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-821110
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-823840
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-823840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-823840
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-823840: (2.332472502s)
helpers_test.go:175: Cleaning up "first-821110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-821110
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-821110: (2.388241582s)
--- PASS: TestMinikubeProfile (47.89s)
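
TestMinikubeProfile starts two independent profiles, switches the active one with the profile command, and parses profile list -ojson to confirm the switch took effect. The same flow by hand:

    # two profiles side by side; "profile <name>" selects the active one
    out/minikube-linux-amd64 start -p first-821110 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 start -p second-823840 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 profile first-821110
    out/minikube-linux-amd64 profile list -ojson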

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.45s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-366224 --memory=3072 --mount-string /tmp/TestMountStartserial4189488601/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-366224 --memory=3072 --mount-string /tmp/TestMountStartserial4189488601/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.450031553s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.45s)
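
The MountStart tests start small no-Kubernetes machines whose only job is to expose a host directory inside the guest via --mount-string; the later Verify* subtests simply list that path over SSH. A condensed sketch using the flags from this run (the host path is the test's temp dir):

    # start a mount-only node and check the host directory is visible inside it
    out/minikube-linux-amd64 start -p mount-start-1-366224 --memory=3072 \
      --mount-string /tmp/TestMountStartserial4189488601/001:/minikube-host \
      --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p mount-start-1-366224 ssh -- ls /minikube-host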

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-366224 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.51s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-378594 --memory=3072 --mount-string /tmp/TestMountStartserial4189488601/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-378594 --memory=3072 --mount-string /tmp/TestMountStartserial4189488601/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.512308586s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.51s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-378594 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-366224 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-366224 --alsologtostderr -v=5: (1.676425408s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-378594 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-378594
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-378594: (1.271528227s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (7.61s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-378594
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-378594: (6.609114743s)
--- PASS: TestMountStart/serial/RestartStopped (7.61s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-378594 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (65.36s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-932524 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1124 03:01:46.516055    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:02:03.962967    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-932524 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.879226178s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.36s)

TestMultiNode/serial/DeployApp2Nodes (4.73s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-932524 -- rollout status deployment/busybox: (3.238163774s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- exec busybox-7b57f96db7-52bc2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- exec busybox-7b57f96db7-zt2wx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- exec busybox-7b57f96db7-52bc2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- exec busybox-7b57f96db7-zt2wx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- exec busybox-7b57f96db7-52bc2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- exec busybox-7b57f96db7-zt2wx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.73s)

TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- exec busybox-7b57f96db7-52bc2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- exec busybox-7b57f96db7-52bc2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- exec busybox-7b57f96db7-zt2wx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-932524 -- exec busybox-7b57f96db7-zt2wx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

TestMultiNode/serial/AddNode (23.59s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-932524 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-932524 -v=5 --alsologtostderr: (22.944814136s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.59s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-932524 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (9.92s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 cp testdata/cp-test.txt multinode-932524:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 cp multinode-932524:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1337193317/001/cp-test_multinode-932524.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 cp multinode-932524:/home/docker/cp-test.txt multinode-932524-m02:/home/docker/cp-test_multinode-932524_multinode-932524-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524-m02 "sudo cat /home/docker/cp-test_multinode-932524_multinode-932524-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 cp multinode-932524:/home/docker/cp-test.txt multinode-932524-m03:/home/docker/cp-test_multinode-932524_multinode-932524-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524-m03 "sudo cat /home/docker/cp-test_multinode-932524_multinode-932524-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 cp testdata/cp-test.txt multinode-932524-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 cp multinode-932524-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1337193317/001/cp-test_multinode-932524-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 cp multinode-932524-m02:/home/docker/cp-test.txt multinode-932524:/home/docker/cp-test_multinode-932524-m02_multinode-932524.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524 "sudo cat /home/docker/cp-test_multinode-932524-m02_multinode-932524.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 cp multinode-932524-m02:/home/docker/cp-test.txt multinode-932524-m03:/home/docker/cp-test_multinode-932524-m02_multinode-932524-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524-m03 "sudo cat /home/docker/cp-test_multinode-932524-m02_multinode-932524-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 cp testdata/cp-test.txt multinode-932524-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 cp multinode-932524-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1337193317/001/cp-test_multinode-932524-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 cp multinode-932524-m03:/home/docker/cp-test.txt multinode-932524:/home/docker/cp-test_multinode-932524-m03_multinode-932524.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524 "sudo cat /home/docker/cp-test_multinode-932524-m03_multinode-932524.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 cp multinode-932524-m03:/home/docker/cp-test.txt multinode-932524-m02:/home/docker/cp-test_multinode-932524-m03_multinode-932524-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 ssh -n multinode-932524-m02 "sudo cat /home/docker/cp-test_multinode-932524-m03_multinode-932524-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.92s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-932524 node stop m03: (1.261868202s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-932524 status: exit status 7 (505.652501ms)

-- stdout --
	multinode-932524
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-932524-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-932524-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-932524 status --alsologtostderr: exit status 7 (497.929676ms)

-- stdout --
	multinode-932524
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-932524-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-932524-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 03:02:48.108099  166023 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:02:48.108196  166023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:02:48.108204  166023 out.go:374] Setting ErrFile to fd 2...
	I1124 03:02:48.108208  166023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:02:48.108401  166023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:02:48.108552  166023 out.go:368] Setting JSON to false
	I1124 03:02:48.108576  166023 mustload.go:66] Loading cluster: multinode-932524
	I1124 03:02:48.108614  166023 notify.go:221] Checking for updates...
	I1124 03:02:48.109360  166023 config.go:182] Loaded profile config "multinode-932524": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:02:48.109404  166023 status.go:174] checking status of multinode-932524 ...
	I1124 03:02:48.110507  166023 cli_runner.go:164] Run: docker container inspect multinode-932524 --format={{.State.Status}}
	I1124 03:02:48.130372  166023 status.go:371] multinode-932524 host status = "Running" (err=<nil>)
	I1124 03:02:48.130410  166023 host.go:66] Checking if "multinode-932524" exists ...
	I1124 03:02:48.130672  166023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-932524
	I1124 03:02:48.148853  166023 host.go:66] Checking if "multinode-932524" exists ...
	I1124 03:02:48.149111  166023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:02:48.149167  166023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-932524
	I1124 03:02:48.167823  166023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32910 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/multinode-932524/id_rsa Username:docker}
	I1124 03:02:48.264227  166023 ssh_runner.go:195] Run: systemctl --version
	I1124 03:02:48.270804  166023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:02:48.283128  166023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:02:48.338389  166023 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-24 03:02:48.329144232 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:02:48.338897  166023 kubeconfig.go:125] found "multinode-932524" server: "https://192.168.67.2:8443"
	I1124 03:02:48.338924  166023 api_server.go:166] Checking apiserver status ...
	I1124 03:02:48.338960  166023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:02:48.350950  166023 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1295/cgroup
	W1124 03:02:48.359363  166023 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1295/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:02:48.359423  166023 ssh_runner.go:195] Run: ls
	I1124 03:02:48.363410  166023 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1124 03:02:48.367361  166023 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1124 03:02:48.367385  166023 status.go:463] multinode-932524 apiserver status = Running (err=<nil>)
	I1124 03:02:48.367396  166023 status.go:176] multinode-932524 status: &{Name:multinode-932524 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:02:48.367414  166023 status.go:174] checking status of multinode-932524-m02 ...
	I1124 03:02:48.367682  166023 cli_runner.go:164] Run: docker container inspect multinode-932524-m02 --format={{.State.Status}}
	I1124 03:02:48.386059  166023 status.go:371] multinode-932524-m02 host status = "Running" (err=<nil>)
	I1124 03:02:48.386082  166023 host.go:66] Checking if "multinode-932524-m02" exists ...
	I1124 03:02:48.386401  166023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-932524-m02
	I1124 03:02:48.403718  166023 host.go:66] Checking if "multinode-932524-m02" exists ...
	I1124 03:02:48.404009  166023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:02:48.404049  166023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-932524-m02
	I1124 03:02:48.421045  166023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32915 SSHKeyPath:/home/jenkins/minikube-integration/21975-4883/.minikube/machines/multinode-932524-m02/id_rsa Username:docker}
	I1124 03:02:48.516898  166023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:02:48.529057  166023 status.go:176] multinode-932524-m02 status: &{Name:multinode-932524-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:02:48.529085  166023 status.go:174] checking status of multinode-932524-m03 ...
	I1124 03:02:48.529309  166023 cli_runner.go:164] Run: docker container inspect multinode-932524-m03 --format={{.State.Status}}
	I1124 03:02:48.546929  166023 status.go:371] multinode-932524-m03 host status = "Stopped" (err=<nil>)
	I1124 03:02:48.546948  166023 status.go:384] host is not running, skipping remaining checks
	I1124 03:02:48.546954  166023 status.go:176] multinode-932524-m03 status: &{Name:multinode-932524-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

TestMultiNode/serial/StartAfterStop (6.91s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-932524 node start m03 -v=5 --alsologtostderr: (6.215097671s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.91s)

TestMultiNode/serial/RestartKeepsNodes (68.51s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-932524
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-932524
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-932524: (25.028831932s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-932524 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-932524 --wait=true -v=5 --alsologtostderr: (43.365117075s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-932524
--- PASS: TestMultiNode/serial/RestartKeepsNodes (68.51s)

TestMultiNode/serial/DeleteNode (5.22s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-932524 node delete m03: (4.622612669s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)

TestMultiNode/serial/StopMultiNode (24.01s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-932524 stop: (23.821100402s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-932524 status: exit status 7 (94.949291ms)

-- stdout --
	multinode-932524
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-932524-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-932524 status --alsologtostderr: exit status 7 (96.955281ms)

-- stdout --
	multinode-932524
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-932524-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 03:04:33.169555  175767 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:04:33.169808  175767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:04:33.169816  175767 out.go:374] Setting ErrFile to fd 2...
	I1124 03:04:33.169820  175767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:04:33.170076  175767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:04:33.170237  175767 out.go:368] Setting JSON to false
	I1124 03:04:33.170263  175767 mustload.go:66] Loading cluster: multinode-932524
	I1124 03:04:33.170376  175767 notify.go:221] Checking for updates...
	I1124 03:04:33.170568  175767 config.go:182] Loaded profile config "multinode-932524": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:04:33.170582  175767 status.go:174] checking status of multinode-932524 ...
	I1124 03:04:33.171040  175767 cli_runner.go:164] Run: docker container inspect multinode-932524 --format={{.State.Status}}
	I1124 03:04:33.190240  175767 status.go:371] multinode-932524 host status = "Stopped" (err=<nil>)
	I1124 03:04:33.190267  175767 status.go:384] host is not running, skipping remaining checks
	I1124 03:04:33.190276  175767 status.go:176] multinode-932524 status: &{Name:multinode-932524 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:04:33.190311  175767 status.go:174] checking status of multinode-932524-m02 ...
	I1124 03:04:33.190546  175767 cli_runner.go:164] Run: docker container inspect multinode-932524-m02 --format={{.State.Status}}
	I1124 03:04:33.208746  175767 status.go:371] multinode-932524-m02 host status = "Stopped" (err=<nil>)
	I1124 03:04:33.208770  175767 status.go:384] host is not running, skipping remaining checks
	I1124 03:04:33.208787  175767 status.go:176] multinode-932524-m02 status: &{Name:multinode-932524-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.01s)

TestMultiNode/serial/RestartMultiNode (48.29s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-932524 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1124 03:04:49.582444    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-932524 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.686496637s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-932524 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.29s)

TestMultiNode/serial/ValidateNameConflict (21.83s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-932524
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-932524-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-932524-m02 --driver=docker  --container-runtime=containerd: exit status 14 (76.464304ms)

-- stdout --
	* [multinode-932524-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-932524-m02' is duplicated with machine name 'multinode-932524-m02' in profile 'multinode-932524'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-932524-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-932524-m03 --driver=docker  --container-runtime=containerd: (19.055800258s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-932524
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-932524: exit status 80 (291.094992ms)

-- stdout --
	* Adding node m03 to cluster multinode-932524 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-932524-m03 already exists in multinode-932524-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-932524-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-932524-m03: (2.34484527s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.83s)

TestPreload (109.84s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-880055 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-880055 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (44.487622483s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-880055 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-880055 image pull gcr.io/k8s-minikube/busybox: (2.794376868s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-880055
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-880055: (5.773508075s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-880055 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1124 03:06:46.516025    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:07:03.963728    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-880055 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (54.114724487s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-880055 image list
helpers_test.go:175: Cleaning up "test-preload-880055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-880055
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-880055: (2.439841965s)
--- PASS: TestPreload (109.84s)

TestScheduledStopUnix (98.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-936010 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-936010 --memory=3072 --driver=docker  --container-runtime=containerd: (21.764300885s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-936010 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 03:07:59.183024  193989 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:07:59.183320  193989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:07:59.183330  193989 out.go:374] Setting ErrFile to fd 2...
	I1124 03:07:59.183335  193989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:07:59.183509  193989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:07:59.183741  193989 out.go:368] Setting JSON to false
	I1124 03:07:59.183865  193989 mustload.go:66] Loading cluster: scheduled-stop-936010
	I1124 03:07:59.184208  193989 config.go:182] Loaded profile config "scheduled-stop-936010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:07:59.184278  193989 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/config.json ...
	I1124 03:07:59.184448  193989 mustload.go:66] Loading cluster: scheduled-stop-936010
	I1124 03:07:59.184543  193989 config.go:182] Loaded profile config "scheduled-stop-936010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-936010 -n scheduled-stop-936010
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-936010 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 03:07:59.576525  194143 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:07:59.576812  194143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:07:59.576822  194143 out.go:374] Setting ErrFile to fd 2...
	I1124 03:07:59.576826  194143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:07:59.577025  194143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:07:59.577235  194143 out.go:368] Setting JSON to false
	I1124 03:07:59.577426  194143 daemonize_unix.go:73] killing process 194024 as it is an old scheduled stop
	I1124 03:07:59.577535  194143 mustload.go:66] Loading cluster: scheduled-stop-936010
	I1124 03:07:59.578266  194143 config.go:182] Loaded profile config "scheduled-stop-936010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:07:59.578374  194143 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/config.json ...
	I1124 03:07:59.578593  194143 mustload.go:66] Loading cluster: scheduled-stop-936010
	I1124 03:07:59.578748  194143 config.go:182] Loaded profile config "scheduled-stop-936010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 03:07:59.585101    8429 retry.go:31] will retry after 73.387µs: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.586263    8429 retry.go:31] will retry after 136.936µs: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.587420    8429 retry.go:31] will retry after 215.563µs: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.588556    8429 retry.go:31] will retry after 207.014µs: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.589711    8429 retry.go:31] will retry after 273.823µs: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.590853    8429 retry.go:31] will retry after 746.274µs: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.591957    8429 retry.go:31] will retry after 926.795µs: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.593102    8429 retry.go:31] will retry after 1.771655ms: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.595288    8429 retry.go:31] will retry after 2.997391ms: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.598457    8429 retry.go:31] will retry after 5.084547ms: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.603597    8429 retry.go:31] will retry after 2.997559ms: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.606853    8429 retry.go:31] will retry after 5.575224ms: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.613090    8429 retry.go:31] will retry after 16.331904ms: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.630383    8429 retry.go:31] will retry after 16.027045ms: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.646572    8429 retry.go:31] will retry after 29.473805ms: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
I1124 03:07:59.676845    8429 retry.go:31] will retry after 44.992939ms: open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-936010 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-936010 -n scheduled-stop-936010
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-936010
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-936010 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 03:08:25.490769  195021 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:08:25.491036  195021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:08:25.491045  195021 out.go:374] Setting ErrFile to fd 2...
	I1124 03:08:25.491049  195021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:08:25.491223  195021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:08:25.491481  195021 out.go:368] Setting JSON to false
	I1124 03:08:25.491552  195021 mustload.go:66] Loading cluster: scheduled-stop-936010
	I1124 03:08:25.491859  195021 config.go:182] Loaded profile config "scheduled-stop-936010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:08:25.491918  195021 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/scheduled-stop-936010/config.json ...
	I1124 03:08:25.492090  195021 mustload.go:66] Loading cluster: scheduled-stop-936010
	I1124 03:08:25.492175  195021 config.go:182] Loaded profile config "scheduled-stop-936010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

** /stderr **
E1124 03:08:27.027815    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-936010
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-936010: exit status 7 (81.818722ms)

-- stdout --
	scheduled-stop-936010
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-936010 -n scheduled-stop-936010
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-936010 -n scheduled-stop-936010: exit status 7 (81.038072ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-936010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-936010
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-936010: (4.720480696s)
--- PASS: TestScheduledStopUnix (98.02s)

TestInsufficientStorage (9.15s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-885070 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-885070 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.660441095s)

-- stdout --
	{"specversion":"1.0","id":"4cc90011-0cad-4ad7-b05b-2c098d1c148e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-885070] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"23225111-5da4-4777-8c92-88ba74a20197","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21975"}}
	{"specversion":"1.0","id":"4510097b-2c13-4527-a580-f79fff72e747","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9d6b197c-2bf7-470d-a4da-ac7b0b3faa55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig"}}
	{"specversion":"1.0","id":"4343793e-842a-4a6b-ba76-7de575ba7f10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube"}}
	{"specversion":"1.0","id":"df4c42ec-6353-4498-9066-9b7993db4947","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"59b65445-ad40-41ce-99c2-880044c0bc2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bc15d227-0317-4828-8759-a308af20f259","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1981100a-f78c-4af4-8a09-067be891ee9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"64b114cc-3d4a-4b87-bc13-65ab51a85488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9207feed-783c-4178-b25a-8039a5fd9c66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b04e7913-d5de-4b9a-8225-d22830b4a694","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-885070\" primary control-plane node in \"insufficient-storage-885070\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"42d1e427-156e-4839-8ba1-405b3aca5e57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763935653-21975 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a28d209-627f-4db8-a236-1f80ee7b6136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c8504da-3048-422f-a2e9-2b4b0d403576","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-885070 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-885070 --output=json --layout=cluster: exit status 7 (295.728996ms)

-- stdout --
	{"Name":"insufficient-storage-885070","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-885070","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1124 03:09:22.325534  197286 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-885070" does not appear in /home/jenkins/minikube-integration/21975-4883/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-885070 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-885070 --output=json --layout=cluster: exit status 7 (290.775084ms)

-- stdout --
	{"Name":"insufficient-storage-885070","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-885070","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 03:09:22.616763  197397 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-885070" does not appear in /home/jenkins/minikube-integration/21975-4883/kubeconfig
	E1124 03:09:22.627206  197397 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/insufficient-storage-885070/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-885070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-885070
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-885070: (1.89943025s)
--- PASS: TestInsufficientStorage (9.15s)
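
The status payloads above use HTTP-style codes: 507 InsufficientStorage for the cluster and node, 500 Error for the missing kubeconfig entry, and 405 Stopped for the apiserver and kubelet. As a rough sketch only (field names are taken from the JSON above; jq on the host is an assumption), the same fields can be pulled out by hand while the profile still exists:

    minikube status -p insufficient-storage-885070 --output=json --layout=cluster \
      | jq -r '.StatusName, (.Nodes[].Components | .apiserver.StatusName, .kubelet.StatusName)'
    # expected for this run: InsufficientStorage, Stopped, Stopped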

                                                
                                    
TestRunningBinaryUpgrade (61.47s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3799930281 start -p running-upgrade-637901 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3799930281 start -p running-upgrade-637901 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (31.004360632s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-637901 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-637901 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (25.54253673s)
helpers_test.go:175: Cleaning up "running-upgrade-637901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-637901
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-637901: (2.236731236s)
--- PASS: TestRunningBinaryUpgrade (61.47s)

                                                
                                    
TestKubernetesUpgrade (332.65s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.119741536s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-093930
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-093930: (11.822862875s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-093930 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-093930 status --format={{.Host}}: exit status 7 (83.591035ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m35.790950791s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-093930 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (99.89508ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-093930] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-093930
	    minikube start -p kubernetes-upgrade-093930 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0939302 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-093930 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.437306937s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-093930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-093930
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-093930: (4.199584289s)
--- PASS: TestKubernetesUpgrade (332.65s)
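
Condensed, the sequence this test exercises (all commands appear verbatim in the run above via the out/minikube-linux-amd64 binary; only timings are omitted) is: install an old Kubernetes, stop, upgrade in place, verify that a downgrade is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED), then restart at the upgraded version:

    minikube start -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
    minikube stop -p kubernetes-upgrade-093930
    minikube start -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=containerd   # upgrade: allowed
    minikube start -p kubernetes-upgrade-093930 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd   # downgrade: refused, exit 106
    minikube delete -p kubernetes-upgrade-093930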

                                                
                                    
TestMissingContainerUpgrade (126.91s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3993557842 start -p missing-upgrade-432438 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3993557842 start -p missing-upgrade-432438 --memory=3072 --driver=docker  --container-runtime=containerd: (1m10.984145223s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-432438
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-432438
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-432438 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-432438 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (49.802255166s)
helpers_test.go:175: Cleaning up "missing-upgrade-432438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-432438
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-432438: (2.538837251s)
--- PASS: TestMissingContainerUpgrade (126.91s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.64s)

                                                
                                    
TestPause/serial/Start (48.37s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-392995 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-392995 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (48.365433885s)
--- PASS: TestPause/serial/Start (48.37s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (99.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1899094125 start -p stopped-upgrade-411350 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1899094125 start -p stopped-upgrade-411350 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m11.324738872s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1899094125 -p stopped-upgrade-411350 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1899094125 -p stopped-upgrade-411350 stop: (1.314709707s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-411350 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-411350 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.799062571s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.44s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.9s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-392995 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-392995 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.884902047s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.90s)

                                                
                                    
TestPause/serial/Pause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-392995 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-392995 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-392995 --output=json --layout=cluster: exit status 2 (366.973139ms)

                                                
                                                
-- stdout --
	{"Name":"pause-392995","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-392995","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)

                                                
                                    
TestPause/serial/Unpause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-392995 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.73s)

                                                
                                    
TestPause/serial/PauseAgain (0.8s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-392995 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

                                                
                                    
TestPause/serial/DeletePaused (2.83s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-392995 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-392995 --alsologtostderr -v=5: (2.832337915s)
--- PASS: TestPause/serial/DeletePaused (2.83s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-392995
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-392995: exit status 1 (19.179539ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-392995: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-411350
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-411350: (1.333097193s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-502612 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-502612 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (78.89162ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-502612] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
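
The exit-14 result above is a pure flag-validation failure: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the combinations minikube does accept (flags copied from other runs in this log):

    # run the node with no Kubernetes components at all
    minikube start -p NoKubernetes-502612 --no-kubernetes --driver=docker --container-runtime=containerd
    # or pin a Kubernetes version, without --no-kubernetes
    minikube start -p NoKubernetes-502612 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
    # if kubernetes-version was set in the global config, clear it first
    minikube config unset kubernetes-version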

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (24.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-502612 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1124 03:11:46.515064    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-502612 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (24.185337182s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-502612 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.55s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (22.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-502612 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1124 03:12:03.963087    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-502612 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (19.903411541s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-502612 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-502612 status -o json: exit status 2 (340.307493ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-502612","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-502612
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-502612: (1.999718154s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.24s)

                                                
                                    
TestNetworkPlugins/group/false (3.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-682898 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-682898 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (165.501096ms)

                                                
                                                
-- stdout --
	* [false-682898] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:12:18.700900  243746 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:12:18.701049  243746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:18.701058  243746 out.go:374] Setting ErrFile to fd 2...
	I1124 03:12:18.701065  243746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:18.701280  243746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-4883/.minikube/bin
	I1124 03:12:18.701742  243746 out.go:368] Setting JSON to false
	I1124 03:12:18.702916  243746 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3282,"bootTime":1763950657,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:12:18.702988  243746 start.go:143] virtualization: kvm guest
	I1124 03:12:18.704798  243746 out.go:179] * [false-682898] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:12:18.706299  243746 notify.go:221] Checking for updates...
	I1124 03:12:18.706304  243746 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:12:18.707455  243746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:12:18.708860  243746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-4883/kubeconfig
	I1124 03:12:18.710048  243746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-4883/.minikube
	I1124 03:12:18.711350  243746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:12:18.712509  243746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:12:18.713982  243746 config.go:182] Loaded profile config "NoKubernetes-502612": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1124 03:12:18.714092  243746 config.go:182] Loaded profile config "cert-expiration-004045": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:12:18.714201  243746 config.go:182] Loaded profile config "kubernetes-upgrade-093930": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:12:18.714327  243746 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:12:18.739123  243746 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:12:18.739299  243746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:18.799919  243746 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 03:12:18.788515588 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:18.800066  243746 docker.go:319] overlay module found
	I1124 03:12:18.801847  243746 out.go:179] * Using the docker driver based on user configuration
	I1124 03:12:18.803020  243746 start.go:309] selected driver: docker
	I1124 03:12:18.803036  243746 start.go:927] validating driver "docker" against <nil>
	I1124 03:12:18.803048  243746 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:12:18.804714  243746 out.go:203] 
	W1124 03:12:18.805869  243746 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1124 03:12:18.807045  243746 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-682898 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-682898

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-682898

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-682898

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-682898

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-682898

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-682898

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-682898

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-682898

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-682898

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-682898

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-682898

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-682898" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-682898" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:12:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-502612
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:11:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-004045
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:11:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-093930
contexts:
- context:
    cluster: NoKubernetes-502612
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:12:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-502612
  name: NoKubernetes-502612
- context:
    cluster: cert-expiration-004045
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:11:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-004045
  name: cert-expiration-004045
- context:
    cluster: kubernetes-upgrade-093930
    user: kubernetes-upgrade-093930
  name: kubernetes-upgrade-093930
current-context: ""
kind: Config
users:
- name: NoKubernetes-502612
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/NoKubernetes-502612/client.crt
    client-key: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/NoKubernetes-502612/client.key
- name: cert-expiration-004045
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/cert-expiration-004045/client.crt
    client-key: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/cert-expiration-004045/client.key
- name: kubernetes-upgrade-093930
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/kubernetes-upgrade-093930/client.crt
    client-key: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/kubernetes-upgrade-093930/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-682898

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-682898"

                                                
                                                
----------------------- debugLogs end: false-682898 [took: 3.124091078s] --------------------------------
helpers_test.go:175: Cleaning up "false-682898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-682898
--- PASS: TestNetworkPlugins/group/false (3.49s)
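
The exit-14 failure is the expected outcome here: with the containerd runtime, minikube rejects --cni=false because containerd relies on a CNI plugin for pod networking, so the rest of the block is just the standard debugLogs dump for a profile that was never created. As an illustrative sketch only (the plugin names come from minikube's documented --cni options, not from this run), a combination that would pass validation is:

    minikube start -p false-682898 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd
    # other accepted --cni values include auto, kindnet, calico, cilium, flannel, or a path to a CNI manifest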

                                                
                                    
TestNoKubernetes/serial/Start (7.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-502612 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-502612 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.535024411s)
--- PASS: TestNoKubernetes/serial/Start (7.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (50.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-838815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-838815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (50.870537573s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.87s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21975-4883/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-502612 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-502612 "sudo systemctl is-active --quiet service kubelet": exit status 1 (336.922411ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
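
The "ssh: Process exited with status 3" line is what makes this check pass: systemctl is-active exits non-zero when the unit is not running (3 conventionally meaning inactive), so a non-zero exit from the probe confirms the kubelet is stopped. A manual equivalent, using the profile from this run:

    minikube ssh -p NoKubernetes-502612 "sudo systemctl is-active kubelet"; echo "exit=$?"
    # prints "inactive" and a non-zero exit status while Kubernetes is disabled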

                                                
                                    
TestNoKubernetes/serial/ProfileList (45.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (28.581102559s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (16.519031217s)
--- PASS: TestNoKubernetes/serial/ProfileList (45.10s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-502612
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-502612: (1.26597073s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-502612 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-502612 --driver=docker  --container-runtime=containerd: (6.687867084s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.69s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-502612 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-502612 "sudo systemctl is-active --quiet service kubelet": exit status 1 (294.299411ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (53.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (53.665597981s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-838815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-838815 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-838815 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-838815 --alsologtostderr -v=3: (12.795172743s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.80s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838815 -n old-k8s-version-838815
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838815 -n old-k8s-version-838815: exit status 7 (93.374316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-838815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
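
Note: this step verifies that an addon can be enabled while the profile is stopped. The --format={{.Host}} argument is a Go template applied to the status object, so only the host field is printed; on a stopped profile that prints "Stopped" with a non-zero exit (7 here), which the test explicitly tolerates ("may be ok"). The same two-step check by hand, assuming the old-k8s-version-838815 profile still exists:

	# template-restricted status; a stopped profile prints "Stopped" and exits non-zero
	out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-838815
	# enabling an addon does not require the cluster to be running
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-838815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4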

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-838815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-838815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (48.158206346s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838815 -n old-k8s-version-838815
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.52s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7w59g" [aa6f8de3-31d5-42ae-a092-8326e6d563c7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003052826s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
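
Note: the helper here polls for pods labelled k8s-app=kubernetes-dashboard until they are Running and healthy, with a 9m ceiling. An equivalent manual check with plain kubectl (kubectl wait stands in for the test's own polling helper), assuming the old-k8s-version-838815 context from this run:

	# list the dashboard pods the test waits on
	kubectl --context old-k8s-version-838815 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# block until they are Ready, with a timeout comparable to the test's 9m limit
	kubectl --context old-k8s-version-838815 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m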

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-182765 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-182765 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-182765 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-182765 --alsologtostderr -v=3: (12.064726929s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7w59g" [aa6f8de3-31d5-42ae-a092-8326e6d563c7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003150766s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-838815 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-838815 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-838815 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838815 -n old-k8s-version-838815
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838815 -n old-k8s-version-838815: exit status 2 (333.827705ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-838815 -n old-k8s-version-838815
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-838815 -n old-k8s-version-838815: exit status 2 (321.806168ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-838815 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838815 -n old-k8s-version-838815
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-838815 -n old-k8s-version-838815
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.83s)
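
Note: the pause check follows a fixed pattern: pause the profile, confirm through status templates that the API server reports Paused and the kubelet reports Stopped (both with exit status 2, which the test accepts), then unpause and confirm both status calls succeed again. The same sequence by hand, using the profile from this block:

	out/minikube-linux-amd64 pause -p old-k8s-version-838815
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-838815   # "Paused", non-zero exit
	out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p old-k8s-version-838815     # "Stopped", non-zero exit
	out/minikube-linux-amd64 unpause -p old-k8s-version-838815
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-838815   # back to exit 0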

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-182765 -n no-preload-182765
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-182765 -n no-preload-182765: exit status 7 (87.942483ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-182765 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (47.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-182765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.058576537s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-182765 -n no-preload-182765
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (43.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-427637 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-427637 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (43.640289964s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.64s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-983163 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-983163 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.136802291s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sx2zv" [568f241b-4f36-4d56-8e47-c6993e510a55] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004378449s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sx2zv" [568f241b-4f36-4d56-8e47-c6993e510a55] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003539509s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-182765 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-182765 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-182765 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-182765 -n no-preload-182765
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-182765 -n no-preload-182765: exit status 2 (386.705897ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-182765 -n no-preload-182765
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-182765 -n no-preload-182765: exit status 2 (382.397355ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-182765 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-182765 -n no-preload-182765
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-182765 -n no-preload-182765
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-427637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-427637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.093471517s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-427637 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (14.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-427637 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-427637 --alsologtostderr -v=3: (14.385727494s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (14.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (25.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-531301 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-531301 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (25.745666518s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.75s)
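
Note: this profile is started with a bare CNI configuration: --network-plugin=cni selects CNI without installing a plugin, --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 is passed through to kubeadm, and --wait=apiserver,system_pods,default_sa limits minikube's readiness wait to those components, since pods are not expected to schedule until a CNI is installed (hence the "cni mode requires additional setup" warnings later in this group). The start command from the log, re-wrapped for readability:

	out/minikube-linux-amd64 start -p newest-cni-531301 --memory=3072 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1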

                                                
                                    
TestNetworkPlugins/group/auto/Start (46.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (46.067370201s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-427637 -n embed-certs-427637
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-427637 -n embed-certs-427637: exit status 7 (102.168894ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-427637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (53.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-427637 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-427637 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (53.545345082s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-427637 -n embed-certs-427637
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-983163 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-983163 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-983163 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-983163 --alsologtostderr -v=3: (12.250100977s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-531301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-531301 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-531301 --alsologtostderr -v=3: (1.387539472s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531301 -n newest-cni-531301
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531301 -n newest-cni-531301: exit status 7 (94.141506ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-531301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (10.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-531301 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-531301 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (10.419128199s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531301 -n newest-cni-531301
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163: exit status 7 (144.534014ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-983163 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-983163 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-983163 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (45.538216516s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-531301 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-531301 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531301 -n newest-cni-531301
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531301 -n newest-cni-531301: exit status 2 (321.439044ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-531301 -n newest-cni-531301
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-531301 -n newest-cni-531301: exit status 2 (330.307761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-531301 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531301 -n newest-cni-531301
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-531301 -n newest-cni-531301
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.80s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (42.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (42.787491041s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.79s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-682898 "pgrep -a kubelet"
I1124 03:16:45.137320    8429 config.go:182] Loaded profile config "auto-682898": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-682898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xhxpt" [24c88fa8-f602-4624-8395-d41af32ea989] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 03:16:46.515107    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/addons-982350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-xhxpt" [24c88fa8-f602-4624-8395-d41af32ea989] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004696621s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-682898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
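
Note: the three short checks above all run inside the netcat deployment created in the NetCatPod step: DNS resolution of kubernetes.default, a localhost connection to the pod's own port 8080, and a hairpin connection back to the pod through its own "netcat" service. Equivalent manual probes, assuming the auto-682898 context and the netcat deployment are still present:

	kubectl --context auto-682898 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: the pod reaches itself via its service name; a timeout here usually points at missing hairpin NAT
	kubectl --context auto-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"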

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-brhxv" [ad8460ec-1c45-4a04-8d08-cb5530fd84f6] Running
E1124 03:17:03.962987    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/functional-524458/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003747968s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-brhxv" [ad8460ec-1c45-4a04-8d08-cb5530fd84f6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003335284s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-427637 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-427637 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-427637 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-427637 -n embed-certs-427637
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-427637 -n embed-certs-427637: exit status 2 (360.062202ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-427637 -n embed-certs-427637
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-427637 -n embed-certs-427637: exit status 2 (361.382827ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-427637 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-427637 -n embed-certs-427637
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-427637 -n embed-certs-427637
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.95s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (52.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (52.361596966s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.874559334s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w4zph" [7f5c0e02-252d-4c30-a38e-613324ed0165] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003499051s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-jh9sv" [343e505a-14e7-42b5-bab2-2641c9089266] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004559115s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w4zph" [7f5c0e02-252d-4c30-a38e-613324ed0165] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00368445s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-983163 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-983163 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-983163 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163: exit status 2 (359.543464ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163: exit status 2 (341.443219ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-983163 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-983163 -n default-k8s-diff-port-983163
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-682898 "pgrep -a kubelet"
I1124 03:17:27.772318    8429 config.go:182] Loaded profile config "kindnet-682898": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-682898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5m9qw" [5ecfc503-b2f7-41c5-829e-ec53db22f30f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5m9qw" [5ecfc503-b2f7-41c5-829e-ec53db22f30f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003387261s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (68.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m8.099805724s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-682898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (52.55s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (52.550997621s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.55s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-s9l82" [52bc3a77-956a-45b9-a315-2aee635f3817] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-s9l82" [52bc3a77-956a-45b9-a315-2aee635f3817] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004586978s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-682898 "pgrep -a kubelet"
I1124 03:18:12.256517    8429 config.go:182] Loaded profile config "calico-682898": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-682898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j86xz" [e7e271ce-029c-4ab6-a5cf-dfcf2deaeff6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-j86xz" [e7e271ce-029c-4ab6-a5cf-dfcf2deaeff6] Running
E1124 03:18:17.075375    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:18:17.081712    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:18:17.093103    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:18:17.114511    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:18:17.155875    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003677212s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.18s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-682898 "pgrep -a kubelet"
I1124 03:18:13.929942    8429 config.go:182] Loaded profile config "custom-flannel-682898": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-682898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d2wz2" [41dfdf0e-c6de-4ca7-9b3b-88cef11e0b62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d2wz2" [41dfdf0e-c6de-4ca7-9b3b-88cef11e0b62] Running
E1124 03:18:17.237879    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:18:17.399264    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:18:17.720574    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:18:18.362302    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:18:19.644265    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004694693s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

TestNetworkPlugins/group/calico/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-682898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-682898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-682898 "pgrep -a kubelet"
I1124 03:18:42.096521    8429 config.go:182] Loaded profile config "enable-default-cni-682898": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-682898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-74zrt" [a622641c-c692-4dbe-8ffb-aa245a8a458a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-74zrt" [a622641c-c692-4dbe-8ffb-aa245a8a458a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004054881s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

TestNetworkPlugins/group/bridge/Start (37.51s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-682898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (37.505971457s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.51s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-682898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-wrr84" [edc9dbf0-9418-4431-a980-bd809de0533a] Running
E1124 03:18:58.051441    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/old-k8s-version-838815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00406239s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-682898 "pgrep -a kubelet"
I1124 03:18:58.846295    8429 config.go:182] Loaded profile config "flannel-682898": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (9.2s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-682898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sfbc5" [a53aa5bd-2487-44cb-b9ff-e0c4fa99b24a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sfbc5" [a53aa5bd-2487-44cb-b9ff-e0c4fa99b24a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003802226s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.20s)

TestNetworkPlugins/group/flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-682898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-682898 "pgrep -a kubelet"
I1124 03:19:21.238748    8429 config.go:182] Loaded profile config "bridge-682898": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-682898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4cm6b" [6776af3c-59f9-40db-9266-258dde9bd4c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 03:19:21.681696    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:19:21.688952    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:19:21.700342    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:19:21.721751    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:19:21.763210    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:19:21.844739    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:19:22.006895    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:19:22.328524    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:19:22.970330    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:19:24.251603    8429 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/no-preload-182765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-4cm6b" [6776af3c-59f9-40db-9266-258dde9bd4c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003704595s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-682898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-682898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

Test skip (26/332)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.2s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-602172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-602172
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

TestNetworkPlugins/group/kubenet (3.3s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-682898 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-682898

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-682898

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-682898

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-682898

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-682898

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-682898

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-682898

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-682898

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-682898

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-682898

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: /etc/hosts:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: /etc/resolv.conf:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-682898

>>> host: crictl pods:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: crictl containers:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> k8s: describe netcat deployment:
error: context "kubenet-682898" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-682898" does not exist

>>> k8s: netcat logs:
error: context "kubenet-682898" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-682898" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-682898" does not exist

>>> k8s: coredns logs:
error: context "kubenet-682898" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-682898" does not exist

>>> k8s: api server logs:
error: context "kubenet-682898" does not exist

>>> host: /etc/cni:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: ip a s:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: ip r s:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: iptables-save:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: iptables table nat:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-682898" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-682898" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-682898" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: kubelet daemon config:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> k8s: kubelet logs:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:12:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-502612
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:11:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-004045
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:11:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-093930
contexts:
- context:
    cluster: NoKubernetes-502612
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:12:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-502612
  name: NoKubernetes-502612
- context:
    cluster: cert-expiration-004045
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:11:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-004045
  name: cert-expiration-004045
- context:
    cluster: kubernetes-upgrade-093930
    user: kubernetes-upgrade-093930
  name: kubernetes-upgrade-093930
current-context: ""
kind: Config
users:
- name: NoKubernetes-502612
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/NoKubernetes-502612/client.crt
    client-key: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/NoKubernetes-502612/client.key
- name: cert-expiration-004045
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/cert-expiration-004045/client.crt
    client-key: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/cert-expiration-004045/client.key
- name: kubernetes-upgrade-093930
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/kubernetes-upgrade-093930/client.crt
    client-key: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/kubernetes-upgrade-093930/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-682898

>>> host: docker daemon status:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: docker daemon config:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: docker system info:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: cri-docker daemon status:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: cri-docker daemon config:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: cri-dockerd version:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: containerd daemon status:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: containerd daemon config:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: containerd config dump:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: crio daemon status:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: crio daemon config:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: /etc/crio:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

>>> host: crio config:
* Profile "kubenet-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-682898"

----------------------- debugLogs end: kubenet-682898 [took: 3.140929391s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-682898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-682898
--- SKIP: TestNetworkPlugins/group/kubenet (3.30s)
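
Note: the repeated "Profile not found" responses above are expected for a skipped group; kubenet-682898 is never started, so every host-side probe issued through that profile has nothing to attach to. Assuming the debug collector gathers these entries by running the listed commands over SSH inside the profile's node (an assumption about the harness, not something this log confirms), an equivalent manual check would be:
	# a skipped group never registers a profile
	out/minikube-linux-amd64 profile list
	# host-side probes go through the profile's node, e.g. (illustrative):
	out/minikube-linux-amd64 -p kubenet-682898 ssh -- sudo systemctl status containerd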

                                                
                                    
TestNetworkPlugins/group/cilium (3.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-682898 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-682898" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:12:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-502612
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:11:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-004045
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-4883/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:11:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-093930
contexts:
- context:
    cluster: NoKubernetes-502612
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:12:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-502612
  name: NoKubernetes-502612
- context:
    cluster: cert-expiration-004045
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:11:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-004045
  name: cert-expiration-004045
- context:
    cluster: kubernetes-upgrade-093930
    user: kubernetes-upgrade-093930
  name: kubernetes-upgrade-093930
current-context: ""
kind: Config
users:
- name: NoKubernetes-502612
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/NoKubernetes-502612/client.crt
    client-key: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/NoKubernetes-502612/client.key
- name: cert-expiration-004045
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/cert-expiration-004045/client.crt
    client-key: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/cert-expiration-004045/client.key
- name: kubernetes-upgrade-093930
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/kubernetes-upgrade-093930/client.crt
    client-key: /home/jenkins/minikube-integration/21975-4883/.minikube/profiles/kubernetes-upgrade-093930/client.key
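
Note: the kubeconfig above accounts for the "context was not found" errors in this section: it lists only the NoKubernetes-502612, cert-expiration-004045 and kubernetes-upgrade-093930 contexts, current-context is empty, and no cilium-682898 context exists because that cluster was never created. A minimal way to confirm this against the same kubeconfig (standard kubectl commands, shown only as an illustration):
	# show the contexts the harness can actually target
	kubectl config get-contexts
	# any probe pinned to the missing profile fails the same way
	kubectl --context cilium-682898 get pods -A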

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-682898

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-682898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-682898"

                                                
                                                
----------------------- debugLogs end: cilium-682898 [took: 3.76263476s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-682898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-682898
--- SKIP: TestNetworkPlugins/group/cilium (3.94s)
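
Note: after the skip, the harness deletes the never-started profile (see the delete run above). A quick way to verify nothing was left behind on the host (illustrative commands, not part of the harness output):
	# no cilium-682898 profile should remain
	out/minikube-linux-amd64 profile list
	# and no leftover container for that profile
	docker ps -a --filter name=cilium-682898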

                                                
                                    