Test Report: Docker_Linux_containerd_arm64 21508

8932374f20a738e68cf28dc9e127463468f1eb30:2025-09-08:41334

Test fail (9/326)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2x25h" [834daeba-b747-4a9b-92f8-8b6002a56239] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-552634 -n old-k8s-version-552634
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-08 13:27:50.21233544 +0000 UTC m=+3300.547870521
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-552634 describe po kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-552634 describe po kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-2x25h
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-552634/192.168.76.2
Start Time:       Mon, 08 Sep 2025 13:18:26 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-94n7n (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-94n7n:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m24s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h to old-k8s-version-552634
Normal   Pulling    7m44s (x4 over 9m23s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     7m43s (x4 over 9m18s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m43s (x4 over 9m18s)   kubelet            Error: ErrImagePull
Warning  Failed     7m30s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    4m14s (x20 over 9m17s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-552634 logs kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-552634 logs kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard: exit status 1 (132.241411ms)

** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-2x25h" is waiting to start: trying and failing to pull image

** /stderr **
start_stop_delete_test.go:272: kubectl --context old-k8s-version-552634 logs kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-552634
helpers_test.go:243: (dbg) docker inspect old-k8s-version-552634:

-- stdout --
	[
	    {
	        "Id": "35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5",
	        "Created": "2025-09-08T13:16:31.223099842Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2954522,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:18:01.325654388Z",
	            "FinishedAt": "2025-09-08T13:18:00.352577855Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5/hostname",
	        "HostsPath": "/var/lib/docker/containers/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5/hosts",
	        "LogPath": "/var/lib/docker/containers/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5-json.log",
	        "Name": "/old-k8s-version-552634",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-552634:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-552634",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5",
	                "LowerDir": "/var/lib/docker/overlay2/423c34de6203abe7390e0ed2ae24e951b4c28c72668f1d0a19312091aedfbdf5-init/diff:/var/lib/docker/overlay2/665ff8bb3d07b8342629df037737f3667c7c59d9d1f85930dc3dfdf138460626/diff",
	                "MergedDir": "/var/lib/docker/overlay2/423c34de6203abe7390e0ed2ae24e951b4c28c72668f1d0a19312091aedfbdf5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/423c34de6203abe7390e0ed2ae24e951b4c28c72668f1d0a19312091aedfbdf5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/423c34de6203abe7390e0ed2ae24e951b4c28c72668f1d0a19312091aedfbdf5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-552634",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-552634/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-552634",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-552634",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-552634",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1054ee47c483a1428606e5542d6cd92f987e38d7fe61c05d8f0b2f04b8c0d12a",
	            "SandboxKey": "/var/run/docker/netns/1054ee47c483",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36723"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36724"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36727"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36725"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36726"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-552634": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:bc:34:57:9c:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31390a5008f77087656c6be660b8e10e100917da7426cbc36a65283e697b2fb6",
	                    "EndpointID": "cb35276bdbd42ef51d3288d1ebaf44c4e25b96a9f437d50b2f35153c806c1498",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-552634",
	                        "35ad4f8124aa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-552634 -n old-k8s-version-552634
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-552634 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-552634 logs -n 25: (2.078355107s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ force-systemd-env-386836 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-386836 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ delete  │ -p force-systemd-env-386836                                                                                                                                                                                                                         │ force-systemd-env-386836 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ start   │ -p cert-expiration-713579 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-713579   │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ start   │ -p pause-864887 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ pause   │ -p pause-864887 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ unpause │ -p pause-864887 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ pause   │ -p pause-864887 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ delete  │ -p pause-864887 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ delete  │ -p pause-864887                                                                                                                                                                                                                                     │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ start   │ -p cert-options-480035 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-480035      │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:16 UTC │
	│ ssh     │ cert-options-480035 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-480035      │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
	│ ssh     │ -p cert-options-480035 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-480035      │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
	│ delete  │ -p cert-options-480035                                                                                                                                                                                                                              │ cert-options-480035      │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
	│ start   │ -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-552634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:17 UTC │ 08 Sep 25 13:17 UTC │
	│ stop    │ -p old-k8s-version-552634 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:17 UTC │ 08 Sep 25 13:18 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-552634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
	│ start   │ -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
	│ start   │ -p cert-expiration-713579 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-713579   │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
	│ delete  │ -p cert-expiration-713579                                                                                                                                                                                                                           │ cert-expiration-713579   │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:19 UTC │
	│ start   │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:19 UTC │ 08 Sep 25 13:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-978911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ stop    │ -p no-preload-978911 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ addons  │ enable dashboard -p no-preload-978911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ start   │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:21 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:20:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:20:35.931338 2962976 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:20:35.931455 2962976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:20:35.931470 2962976 out.go:374] Setting ErrFile to fd 2...
	I0908 13:20:35.931478 2962976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:20:35.931739 2962976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 13:20:35.932098 2962976 out.go:368] Setting JSON to false
	I0908 13:20:35.933046 2962976 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":68588,"bootTime":1757269048,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 13:20:35.933113 2962976 start.go:140] virtualization:  
	I0908 13:20:35.938091 2962976 out.go:179] * [no-preload-978911] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:20:35.941259 2962976 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:20:35.941303 2962976 notify.go:220] Checking for updates...
	I0908 13:20:35.946964 2962976 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:20:35.949861 2962976 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:20:35.952715 2962976 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 13:20:35.956376 2962976 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:20:35.959333 2962976 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:20:35.962746 2962976 config.go:182] Loaded profile config "no-preload-978911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:20:35.963352 2962976 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:20:35.991533 2962976 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:20:35.991638 2962976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:20:36.072292 2962976 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:20:36.052728775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:20:36.072406 2962976 docker.go:318] overlay module found
	I0908 13:20:36.075656 2962976 out.go:179] * Using the docker driver based on existing profile
	I0908 13:20:36.078527 2962976 start.go:304] selected driver: docker
	I0908 13:20:36.078546 2962976 start.go:918] validating driver "docker" against &{Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:20:36.078664 2962976 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:20:36.079452 2962976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:20:36.145451 2962976 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:20:36.13563539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:20:36.145819 2962976 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:20:36.145841 2962976 cni.go:84] Creating CNI manager for ""
	I0908 13:20:36.145901 2962976 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:20:36.145935 2962976 start.go:348] cluster config:
	{Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:20:36.150798 2962976 out.go:179] * Starting "no-preload-978911" primary control-plane node in "no-preload-978911" cluster
	I0908 13:20:36.153634 2962976 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 13:20:36.156700 2962976 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:20:36.159655 2962976 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:20:36.159871 2962976 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/config.json ...
	I0908 13:20:36.160216 2962976 cache.go:107] acquiring lock: {Name:mk9f7cd9bf685dbdd22a939bba5743203e9424b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160296 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0908 13:20:36.160304 2962976 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 94.209µs
	I0908 13:20:36.160319 2962976 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0908 13:20:36.159703 2962976 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:20:36.160406 2962976 cache.go:107] acquiring lock: {Name:mka44a87e995f06fac0280236e9044a05cbf0c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160447 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0908 13:20:36.160453 2962976 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0" took 50.452µs
	I0908 13:20:36.160460 2962976 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0908 13:20:36.160483 2962976 cache.go:107] acquiring lock: {Name:mkb0b6bdd176d599d5a383a38a60d5e44912d326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160512 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0908 13:20:36.160517 2962976 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0" took 35.248µs
	I0908 13:20:36.160522 2962976 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0908 13:20:36.160531 2962976 cache.go:107] acquiring lock: {Name:mk3a1846ff1d17320a61c4f0cd7f03a465580c64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160557 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0908 13:20:36.160562 2962976 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0" took 31.646µs
	I0908 13:20:36.160568 2962976 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0908 13:20:36.160576 2962976 cache.go:107] acquiring lock: {Name:mk1c90c15fea0bf3c7271fb14d259c914df38d83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160600 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0908 13:20:36.160605 2962976 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0" took 29.727µs
	I0908 13:20:36.160612 2962976 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0908 13:20:36.160622 2962976 cache.go:107] acquiring lock: {Name:mke7032f762990626a62b2503bb54454bb8e4428 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160650 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I0908 13:20:36.160655 2962976 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 33.541µs
	I0908 13:20:36.160660 2962976 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0908 13:20:36.160669 2962976 cache.go:107] acquiring lock: {Name:mk8b0387706fadd68f571a10efda673c0c270d63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160693 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0908 13:20:36.160698 2962976 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.12µs
	I0908 13:20:36.160709 2962976 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0908 13:20:36.160719 2962976 cache.go:107] acquiring lock: {Name:mk33ca43d20f07ddc371c694dc9c7a9ebcb088c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160744 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0908 13:20:36.160749 2962976 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.663µs
	I0908 13:20:36.160754 2962976 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0908 13:20:36.160760 2962976 cache.go:87] Successfully saved all images to host disk.
	I0908 13:20:36.180376 2962976 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 13:20:36.180401 2962976 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 13:20:36.180415 2962976 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:20:36.180445 2962976 start.go:360] acquireMachinesLock for no-preload-978911: {Name:mk7699a0142cc873eeb1530cb26c114199650434 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.180501 2962976 start.go:364] duration metric: took 35.707µs to acquireMachinesLock for "no-preload-978911"
	I0908 13:20:36.180529 2962976 start.go:96] Skipping create...Using existing machine configuration
	I0908 13:20:36.180538 2962976 fix.go:54] fixHost starting: 
	I0908 13:20:36.180803 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:36.198068 2962976 fix.go:112] recreateIfNeeded on no-preload-978911: state=Stopped err=<nil>
	W0908 13:20:36.198097 2962976 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 13:20:36.201397 2962976 out.go:252] * Restarting existing docker container for "no-preload-978911" ...
	I0908 13:20:36.201532 2962976 cli_runner.go:164] Run: docker start no-preload-978911
	I0908 13:20:36.450658 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:36.474394 2962976 kic.go:430] container "no-preload-978911" state is running.
	I0908 13:20:36.474790 2962976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978911
	I0908 13:20:36.494941 2962976 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/config.json ...
	I0908 13:20:36.495172 2962976 machine.go:93] provisionDockerMachine start ...
	I0908 13:20:36.495236 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:36.514963 2962976 main.go:141] libmachine: Using SSH client type: native
	I0908 13:20:36.515301 2962976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36733 <nil> <nil>}
	I0908 13:20:36.515314 2962976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:20:36.515903 2962976 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55582->127.0.0.1:36733: read: connection reset by peer
	I0908 13:20:39.637915 2962976 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-978911
	
	I0908 13:20:39.637978 2962976 ubuntu.go:182] provisioning hostname "no-preload-978911"
	I0908 13:20:39.638058 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:39.656178 2962976 main.go:141] libmachine: Using SSH client type: native
	I0908 13:20:39.656497 2962976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36733 <nil> <nil>}
	I0908 13:20:39.656514 2962976 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-978911 && echo "no-preload-978911" | sudo tee /etc/hostname
	I0908 13:20:39.795726 2962976 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-978911
	
	I0908 13:20:39.795805 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:39.814502 2962976 main.go:141] libmachine: Using SSH client type: native
	I0908 13:20:39.814810 2962976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36733 <nil> <nil>}
	I0908 13:20:39.814835 2962976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-978911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-978911/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-978911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:20:39.950789 2962976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 13:20:39.950812 2962976 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-2749258/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-2749258/.minikube}
	I0908 13:20:39.950834 2962976 ubuntu.go:190] setting up certificates
	I0908 13:20:39.950843 2962976 provision.go:84] configureAuth start
	I0908 13:20:39.950907 2962976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978911
	I0908 13:20:39.972214 2962976 provision.go:143] copyHostCerts
	I0908 13:20:39.972295 2962976 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem, removing ...
	I0908 13:20:39.972317 2962976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem
	I0908 13:20:39.972393 2962976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem (1082 bytes)
	I0908 13:20:39.972496 2962976 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem, removing ...
	I0908 13:20:39.972501 2962976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem
	I0908 13:20:39.972526 2962976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem (1123 bytes)
	I0908 13:20:39.972586 2962976 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem, removing ...
	I0908 13:20:39.972591 2962976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem
	I0908 13:20:39.972613 2962976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem (1679 bytes)
	I0908 13:20:39.972667 2962976 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem org=jenkins.no-preload-978911 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-978911]
	I0908 13:20:40.245168 2962976 provision.go:177] copyRemoteCerts
	I0908 13:20:40.245243 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:20:40.245295 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.263254 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.355579 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 13:20:40.380935 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 13:20:40.406306 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 13:20:40.432096 2962976 provision.go:87] duration metric: took 481.231644ms to configureAuth
	I0908 13:20:40.432126 2962976 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:20:40.432326 2962976 config.go:182] Loaded profile config "no-preload-978911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:20:40.432340 2962976 machine.go:96] duration metric: took 3.937157056s to provisionDockerMachine
	I0908 13:20:40.432348 2962976 start.go:293] postStartSetup for "no-preload-978911" (driver="docker")
	I0908 13:20:40.432359 2962976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:20:40.432420 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:20:40.432470 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.449780 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.539192 2962976 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:20:40.543086 2962976 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:20:40.543119 2962976 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:20:40.543129 2962976 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:20:40.543142 2962976 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:20:40.543156 2962976 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/addons for local assets ...
	I0908 13:20:40.543213 2962976 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/files for local assets ...
	I0908 13:20:40.543299 2962976 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem -> 27511142.pem in /etc/ssl/certs
	I0908 13:20:40.543407 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 13:20:40.552362 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:20:40.577039 2962976 start.go:296] duration metric: took 144.675775ms for postStartSetup
	I0908 13:20:40.577118 2962976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:20:40.577178 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.593852 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.684143 2962976 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:20:40.688435 2962976 fix.go:56] duration metric: took 4.507889752s for fixHost
	I0908 13:20:40.688464 2962976 start.go:83] releasing machines lock for "no-preload-978911", held for 4.507944625s
	I0908 13:20:40.688533 2962976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978911
	I0908 13:20:40.705355 2962976 ssh_runner.go:195] Run: cat /version.json
	I0908 13:20:40.705419 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.705609 2962976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:20:40.705666 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.727686 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.735980 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.822021 2962976 ssh_runner.go:195] Run: systemctl --version
	I0908 13:20:40.992138 2962976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:20:40.996696 2962976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 13:20:41.017301 2962976 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 13:20:41.017379 2962976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:20:41.026660 2962976 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 13:20:41.026732 2962976 start.go:495] detecting cgroup driver to use...
	I0908 13:20:41.026779 2962976 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:20:41.026849 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 13:20:41.041661 2962976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 13:20:41.053793 2962976 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:20:41.053929 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:20:41.068141 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:20:41.079992 2962976 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:20:41.158894 2962976 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:20:41.250698 2962976 docker.go:234] disabling docker service ...
	I0908 13:20:41.250775 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:20:41.265554 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:20:41.277916 2962976 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:20:41.368244 2962976 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:20:41.462413 2962976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:20:41.475092 2962976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:20:41.493271 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 13:20:41.505845 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 13:20:41.517791 2962976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 13:20:41.517922 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 13:20:41.528370 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:20:41.541165 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 13:20:41.551232 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:20:41.562025 2962976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:20:41.572196 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 13:20:41.582707 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 13:20:41.593503 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 13:20:41.604600 2962976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:20:41.614626 2962976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:20:41.623508 2962976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:20:41.714072 2962976 ssh_runner.go:195] Run: sudo systemctl restart containerd
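[editor's note] The run above adjusts /etc/containerd/config.toml in place with a series of sed commands (sandbox image, SystemdCgroup, runc runtime version, CNI conf_dir) and then restarts containerd. As a rough illustration of the same idea outside the log, the Go sketch below applies just the SystemdCgroup edit with the regexp package; it is an assumption for illustration only, not minikube's implementation, and the file path would need root on a real node.

    // systemd_cgroup_off.go - minimal sketch: force SystemdCgroup = false in a
    // containerd config, mirroring the sed command shown in the log above.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml" // assumed path, as in the log

        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatalf("read %s: %v", path, err)
        }

        // Same substitution as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))

        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatalf("write %s: %v", path, err)
        }
    }

After a change like this the runtime has to be restarted, which is exactly what the `sudo systemctl restart containerd` line above does.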
	I0908 13:20:41.897907 2962976 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0908 13:20:41.898011 2962976 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0908 13:20:41.902605 2962976 start.go:563] Will wait 60s for crictl version
	I0908 13:20:41.902693 2962976 ssh_runner.go:195] Run: which crictl
	I0908 13:20:41.907099 2962976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:20:41.945823 2962976 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0908 13:20:41.945960 2962976 ssh_runner.go:195] Run: containerd --version
	I0908 13:20:41.970995 2962976 ssh_runner.go:195] Run: containerd --version
	I0908 13:20:42.004794 2962976 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0908 13:20:42.019524 2962976 cli_runner.go:164] Run: docker network inspect no-preload-978911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:20:42.038523 2962976 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0908 13:20:42.042566 2962976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:20:42.054333 2962976 kubeadm.go:875] updating cluster {Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:20:42.054518 2962976 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:20:42.054568 2962976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:20:42.107352 2962976 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:20:42.107387 2962976 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:20:42.107396 2962976 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 containerd true true} ...
	I0908 13:20:42.107557 2962976 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-978911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 13:20:42.107646 2962976 ssh_runner.go:195] Run: sudo crictl info
	I0908 13:20:42.191420 2962976 cni.go:84] Creating CNI manager for ""
	I0908 13:20:42.191459 2962976 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:20:42.191472 2962976 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:20:42.191522 2962976 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-978911 NodeName:no-preload-978911 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:20:42.191696 2962976 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-978911"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 13:20:42.191812 2962976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:20:42.205135 2962976 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:20:42.205221 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:20:42.217434 2962976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0908 13:20:42.248179 2962976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:20:42.275286 2962976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I0908 13:20:42.308117 2962976 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:20:42.312692 2962976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
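[editor's note] Both host entries above (host.minikube.internal and control-plane.minikube.internal) are injected with the same shell pattern: drop any existing line for the hostname, append the new IP/host mapping, and copy the temp file back over /etc/hosts. A hedged Go equivalent of that pattern, with the IP and hostname taken from the log and everything else purely illustrative, could look like:

    // ensure_hosts_entry.go - sketch of the "{ grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts"
    // pattern from the log. Not minikube's implementation; requires root for the real file.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry removes any existing line for host and appends "ip\thost".
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+host) { // drop stale entries, like grep -v $'\thost'
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }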
	I0908 13:20:42.326442 2962976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:20:42.421795 2962976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:20:42.436584 2962976 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911 for IP: 192.168.85.2
	I0908 13:20:42.436607 2962976 certs.go:194] generating shared ca certs ...
	I0908 13:20:42.436625 2962976 certs.go:226] acquiring lock for ca certs: {Name:mka64c3c41f67c038c6cf0d4d20f2375b7abe78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:20:42.436807 2962976 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key
	I0908 13:20:42.436928 2962976 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key
	I0908 13:20:42.436943 2962976 certs.go:256] generating profile certs ...
	I0908 13:20:42.437066 2962976 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.key
	I0908 13:20:42.437162 2962976 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/apiserver.key.7fd60a6c
	I0908 13:20:42.437238 2962976 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/proxy-client.key
	I0908 13:20:42.437393 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem (1338 bytes)
	W0908 13:20:42.437445 2962976 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114_empty.pem, impossibly tiny 0 bytes
	I0908 13:20:42.437460 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 13:20:42.437491 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem (1082 bytes)
	I0908 13:20:42.437542 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:20:42.437581 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem (1679 bytes)
	I0908 13:20:42.437641 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:20:42.438302 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:20:42.466909 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 13:20:42.494218 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:20:42.521632 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 13:20:42.551803 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 13:20:42.582057 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 13:20:42.614465 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:20:42.652857 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 13:20:42.682624 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /usr/share/ca-certificates/27511142.pem (1708 bytes)
	I0908 13:20:42.708216 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:20:42.734706 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem --> /usr/share/ca-certificates/2751114.pem (1338 bytes)
	I0908 13:20:42.761119 2962976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:20:42.781025 2962976 ssh_runner.go:195] Run: openssl version
	I0908 13:20:42.787728 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:20:42.797410 2962976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:20:42.801102 2962976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:20:42.801201 2962976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:20:42.808137 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 13:20:42.817759 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2751114.pem && ln -fs /usr/share/ca-certificates/2751114.pem /etc/ssl/certs/2751114.pem"
	I0908 13:20:42.827168 2962976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2751114.pem
	I0908 13:20:42.831084 2962976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 12:41 /usr/share/ca-certificates/2751114.pem
	I0908 13:20:42.831147 2962976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2751114.pem
	I0908 13:20:42.838272 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2751114.pem /etc/ssl/certs/51391683.0"
	I0908 13:20:42.847707 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27511142.pem && ln -fs /usr/share/ca-certificates/27511142.pem /etc/ssl/certs/27511142.pem"
	I0908 13:20:42.859460 2962976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27511142.pem
	I0908 13:20:42.863196 2962976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 12:41 /usr/share/ca-certificates/27511142.pem
	I0908 13:20:42.863282 2962976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27511142.pem
	I0908 13:20:42.870770 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27511142.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 13:20:42.881017 2962976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:20:42.884978 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 13:20:42.894328 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 13:20:42.901729 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 13:20:42.909115 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 13:20:42.916106 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 13:20:42.923024 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
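[editor's note] The `openssl x509 -noout -in <cert> -checkend 86400` calls above simply verify that each control-plane certificate remains valid for at least 24 hours before the existing cluster is reused. A minimal Go sketch of the same check with crypto/x509 follows; the certificate path and window are placeholders taken from the log, and this is illustrative rather than minikube's code.

    // checkend.go - sketch of "openssl x509 -noout -in cert.crt -checkend 86400":
    // report whether the certificate expires within the given window.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "log"
        "os"
        "time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }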
	I0908 13:20:42.930253 2962976 kubeadm.go:392] StartCluster: {Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:20:42.930382 2962976 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0908 13:20:42.930445 2962976 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:20:42.967887 2962976 cri.go:89] found id: "bdd85fd62e90072711cf66c0db968c1136a28f624fc072df147df0fc494584c8"
	I0908 13:20:42.967912 2962976 cri.go:89] found id: "e685e05b08b51e456163cacd1644bf5fab5dd1c6118ed288241b851f6da29e62"
	I0908 13:20:42.967917 2962976 cri.go:89] found id: "aa297480aa1b27d3b15093502059992dfea640300a65451e6f3db7b6b056ed1a"
	I0908 13:20:42.967923 2962976 cri.go:89] found id: "d5252e4ac54a43b7539b2bfe24a8a0183a6b9420e5f2255895a872dd266dfbdd"
	I0908 13:20:42.967927 2962976 cri.go:89] found id: "f5e8fe9a2b29ca8f991932c0c60513abc177286d77ac00c6ac9f77de28096302"
	I0908 13:20:42.967933 2962976 cri.go:89] found id: "e59a4771913f0c586033aa2f970d5003227c9262bc5c73b7ef6007c8ab2801a0"
	I0908 13:20:42.967937 2962976 cri.go:89] found id: "453e5e825289a6e70e8cee4d4d3e9be4fa57968b9f3101e0486c55f00773e336"
	I0908 13:20:42.967962 2962976 cri.go:89] found id: "89cf83ed06352d9266afeb8d98085daf1e7cc6dfe5636d2a24ff0d4804025f62"
	I0908 13:20:42.967970 2962976 cri.go:89] found id: ""
	I0908 13:20:42.968033 2962976 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0908 13:20:42.983189 2962976 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-08T13:20:42Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0908 13:20:42.983277 2962976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 13:20:42.992272 2962976 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 13:20:42.992292 2962976 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 13:20:42.992372 2962976 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 13:20:43.001397 2962976 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 13:20:43.002879 2962976 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-978911" does not appear in /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:20:43.003451 2962976 kubeconfig.go:62] /home/jenkins/minikube-integration/21508-2749258/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-978911" cluster setting kubeconfig missing "no-preload-978911" context setting]
	I0908 13:20:43.004375 2962976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:20:43.009201 2962976 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 13:20:43.018628 2962976 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0908 13:20:43.018703 2962976 kubeadm.go:593] duration metric: took 26.405085ms to restartPrimaryControlPlane
	I0908 13:20:43.018720 2962976 kubeadm.go:394] duration metric: took 88.476669ms to StartCluster
	I0908 13:20:43.018749 2962976 settings.go:142] acquiring lock: {Name:mk4a46c455122873706b4d72c01ce6416a89153c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:20:43.018813 2962976 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:20:43.019719 2962976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:20:43.019921 2962976 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 13:20:43.020275 2962976 config.go:182] Loaded profile config "no-preload-978911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:20:43.020343 2962976 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 13:20:43.020435 2962976 addons.go:69] Setting storage-provisioner=true in profile "no-preload-978911"
	I0908 13:20:43.020455 2962976 addons.go:238] Setting addon storage-provisioner=true in "no-preload-978911"
	W0908 13:20:43.020466 2962976 addons.go:247] addon storage-provisioner should already be in state true
	I0908 13:20:43.020459 2962976 addons.go:69] Setting dashboard=true in profile "no-preload-978911"
	I0908 13:20:43.020534 2962976 addons.go:238] Setting addon dashboard=true in "no-preload-978911"
	W0908 13:20:43.020568 2962976 addons.go:247] addon dashboard should already be in state true
	I0908 13:20:43.020608 2962976 host.go:66] Checking if "no-preload-978911" exists ...
	I0908 13:20:43.020489 2962976 host.go:66] Checking if "no-preload-978911" exists ...
	I0908 13:20:43.021413 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.021413 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.020494 2962976 addons.go:69] Setting metrics-server=true in profile "no-preload-978911"
	I0908 13:20:43.022090 2962976 addons.go:238] Setting addon metrics-server=true in "no-preload-978911"
	W0908 13:20:43.022105 2962976 addons.go:247] addon metrics-server should already be in state true
	I0908 13:20:43.022137 2962976 host.go:66] Checking if "no-preload-978911" exists ...
	I0908 13:20:43.022666 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.020436 2962976 addons.go:69] Setting default-storageclass=true in profile "no-preload-978911"
	I0908 13:20:43.024278 2962976 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-978911"
	I0908 13:20:43.025361 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.026372 2962976 out.go:179] * Verifying Kubernetes components...
	I0908 13:20:43.030861 2962976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:20:43.087484 2962976 addons.go:238] Setting addon default-storageclass=true in "no-preload-978911"
	W0908 13:20:43.087508 2962976 addons.go:247] addon default-storageclass should already be in state true
	I0908 13:20:43.087533 2962976 host.go:66] Checking if "no-preload-978911" exists ...
	I0908 13:20:43.087950 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.102706 2962976 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 13:20:43.102748 2962976 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 13:20:43.102758 2962976 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 13:20:43.105724 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 13:20:43.105749 2962976 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 13:20:43.105773 2962976 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:20:43.105788 2962976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 13:20:43.105823 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:43.105850 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:43.115360 2962976 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 13:20:43.118627 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 13:20:43.118650 2962976 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 13:20:43.118715 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:43.136790 2962976 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 13:20:43.136812 2962976 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 13:20:43.136876 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:43.175291 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:43.191796 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:43.193563 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:43.208474 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:43.247091 2962976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:20:43.292511 2962976 node_ready.go:35] waiting up to 6m0s for node "no-preload-978911" to be "Ready" ...
	I0908 13:20:43.382311 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:20:43.421803 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 13:20:43.421878 2962976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 13:20:43.458209 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 13:20:43.458297 2962976 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 13:20:43.473379 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:20:43.552131 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 13:20:43.552207 2962976 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 13:20:43.560638 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 13:20:43.560711 2962976 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 13:20:43.624453 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:20:43.624479 2962976 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 13:20:43.699654 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 13:20:43.699676 2962976 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 13:20:43.771524 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0908 13:20:43.785451 2962976 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:20:43.785495 2962976 retry.go:31] will retry after 247.912555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 13:20:43.785541 2962976 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:20:43.785547 2962976 retry.go:31] will retry after 303.088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
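[editor's note] The two "apply failed, will retry" warnings above cover the window where kubelet is already running but the apiserver is not yet answering on localhost:8443, so the first kubectl apply fails validation and is retried after a short delay. A generic sketch of that retry loop is shown below; the attempt count, delays, and file path are made up for illustration and are not minikube's retry implementation.

    // retry_apply.go - sketch of a retry-with-backoff wrapper like the one the log
    // shows around "kubectl apply" while the apiserver is still coming up.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // retry runs fn up to attempts times, sleeping delay (and doubling it) between tries.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            log.Printf("attempt %d failed: %v; retrying in %s", i+1, err, delay)
            time.Sleep(delay)
            delay *= 2
        }
        return fmt.Errorf("all %d attempts failed: %w", attempts, err)
    }

    func main() {
        err := retry(5, 250*time.Millisecond, func() error {
            // Same kind of command as in the log; the path is illustrative.
            return exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storageclass.yaml").Run()
        })
        if err != nil {
            log.Fatal(err)
        }
    }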
	I0908 13:20:43.834968 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 13:20:43.835041 2962976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 13:20:43.961116 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 13:20:43.961144 2962976 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 13:20:44.033768 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:20:44.089120 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:20:44.174947 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 13:20:44.174976 2962976 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 13:20:44.350161 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 13:20:44.350203 2962976 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 13:20:44.457915 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 13:20:44.457956 2962976 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 13:20:44.496024 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:20:44.496054 2962976 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 13:20:44.520347 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:20:48.489246 2962976 node_ready.go:49] node "no-preload-978911" is "Ready"
	I0908 13:20:48.489276 2962976 node_ready.go:38] duration metric: took 5.196680279s for node "no-preload-978911" to be "Ready" ...
	I0908 13:20:48.489290 2962976 api_server.go:52] waiting for apiserver process to appear ...
	I0908 13:20:48.489355 2962976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:20:51.238545 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.466984527s)
	I0908 13:20:51.238585 2962976 addons.go:479] Verifying addon metrics-server=true in "no-preload-978911"
	I0908 13:20:51.400235 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.366422544s)
	I0908 13:20:51.400286 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.311136503s)
	I0908 13:20:51.400525 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.880137919s)
	I0908 13:20:51.400723 2962976 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.911337726s)
	I0908 13:20:51.400744 2962976 api_server.go:72] duration metric: took 8.380794855s to wait for apiserver process to appear ...
	I0908 13:20:51.400750 2962976 api_server.go:88] waiting for apiserver healthz status ...
	I0908 13:20:51.400766 2962976 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0908 13:20:51.403695 2962976 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-978911 addons enable metrics-server
	
	I0908 13:20:51.409743 2962976 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
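[editor's note] The health gate here is just an HTTPS GET against the apiserver's /healthz endpoint, accepting a 200 response with body "ok". A bare-bones sketch of that probe follows; the endpoint comes from the log, while skipping TLS verification is an assumption made only to keep the example self-contained (a real probe would trust the cluster CA).

    // healthz.go - sketch of the "Checking apiserver healthz" probe from the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // NOTE: insecure TLS only to keep the sketch short; see caveat above.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok" on a healthy apiserver
    }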
	I0908 13:20:51.411883 2962976 api_server.go:141] control plane version: v1.34.0
	I0908 13:20:51.411916 2962976 api_server.go:131] duration metric: took 11.159718ms to wait for apiserver health ...
	I0908 13:20:51.411925 2962976 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 13:20:51.414126 2962976 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0908 13:20:51.415529 2962976 system_pods.go:59] 9 kube-system pods found
	I0908 13:20:51.415567 2962976 system_pods.go:61] "coredns-66bc5c9577-7www8" [cb6a614e-8f35-46f4-957d-04268f222190] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:20:51.415576 2962976 system_pods.go:61] "etcd-no-preload-978911" [4e38fee5-f757-4ee8-a97f-c76e4b633559] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:20:51.415587 2962976 system_pods.go:61] "kindnet-8cc7v" [3da2f7cd-76d4-456a-8cc8-069d4c2405a6] Running
	I0908 13:20:51.415596 2962976 system_pods.go:61] "kube-apiserver-no-preload-978911" [44a03487-7993-4879-9ab7-88227004b4f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:20:51.415617 2962976 system_pods.go:61] "kube-controller-manager-no-preload-978911" [7a8863ca-4835-46f7-9529-dd33b2a669f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:20:51.415626 2962976 system_pods.go:61] "kube-proxy-zb84d" [05431e58-3897-4783-899f-e079efa82e52] Running
	I0908 13:20:51.415636 2962976 system_pods.go:61] "kube-scheduler-no-preload-978911" [3d53a214-024a-4b7c-9500-23b47958a0c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:20:51.415643 2962976 system_pods.go:61] "metrics-server-746fcd58dc-vh962" [959e88f4-10f0-4c5b-98da-0451d012b212] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:20:51.415653 2962976 system_pods.go:61] "storage-provisioner" [062103e0-2e60-4495-84fe-e00955426335] Running
	I0908 13:20:51.415659 2962976 system_pods.go:74] duration metric: took 3.729531ms to wait for pod list to return data ...
	I0908 13:20:51.415666 2962976 default_sa.go:34] waiting for default service account to be created ...
	I0908 13:20:51.417731 2962976 addons.go:514] duration metric: took 8.397368128s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0908 13:20:51.418437 2962976 default_sa.go:45] found service account: "default"
	I0908 13:20:51.418459 2962976 default_sa.go:55] duration metric: took 2.786524ms for default service account to be created ...
	I0908 13:20:51.418468 2962976 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 13:20:51.421224 2962976 system_pods.go:86] 9 kube-system pods found
	I0908 13:20:51.421265 2962976 system_pods.go:89] "coredns-66bc5c9577-7www8" [cb6a614e-8f35-46f4-957d-04268f222190] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:20:51.421274 2962976 system_pods.go:89] "etcd-no-preload-978911" [4e38fee5-f757-4ee8-a97f-c76e4b633559] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:20:51.421281 2962976 system_pods.go:89] "kindnet-8cc7v" [3da2f7cd-76d4-456a-8cc8-069d4c2405a6] Running
	I0908 13:20:51.421293 2962976 system_pods.go:89] "kube-apiserver-no-preload-978911" [44a03487-7993-4879-9ab7-88227004b4f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:20:51.421309 2962976 system_pods.go:89] "kube-controller-manager-no-preload-978911" [7a8863ca-4835-46f7-9529-dd33b2a669f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:20:51.421314 2962976 system_pods.go:89] "kube-proxy-zb84d" [05431e58-3897-4783-899f-e079efa82e52] Running
	I0908 13:20:51.421321 2962976 system_pods.go:89] "kube-scheduler-no-preload-978911" [3d53a214-024a-4b7c-9500-23b47958a0c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:20:51.421331 2962976 system_pods.go:89] "metrics-server-746fcd58dc-vh962" [959e88f4-10f0-4c5b-98da-0451d012b212] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:20:51.421336 2962976 system_pods.go:89] "storage-provisioner" [062103e0-2e60-4495-84fe-e00955426335] Running
	I0908 13:20:51.421344 2962976 system_pods.go:126] duration metric: took 2.87014ms to wait for k8s-apps to be running ...
	I0908 13:20:51.421353 2962976 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 13:20:51.421410 2962976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:20:51.436310 2962976 system_svc.go:56] duration metric: took 14.944353ms WaitForService to wait for kubelet
	I0908 13:20:51.436337 2962976 kubeadm.go:578] duration metric: took 8.416385263s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:20:51.436356 2962976 node_conditions.go:102] verifying NodePressure condition ...
	I0908 13:20:51.439945 2962976 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 13:20:51.440017 2962976 node_conditions.go:123] node cpu capacity is 2
	I0908 13:20:51.440045 2962976 node_conditions.go:105] duration metric: took 3.683559ms to run NodePressure ...
	I0908 13:20:51.440074 2962976 start.go:241] waiting for startup goroutines ...
	I0908 13:20:51.440108 2962976 start.go:246] waiting for cluster config update ...
	I0908 13:20:51.440137 2962976 start.go:255] writing updated cluster config ...
	I0908 13:20:51.440471 2962976 ssh_runner.go:195] Run: rm -f paused
	I0908 13:20:51.443803 2962976 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:20:51.448363 2962976 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7www8" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 13:20:53.454728 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:20:55.456275 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:20:57.954301 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:20:59.954409 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:02.454336 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:04.954288 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:06.954819 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:09.453453 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:11.453986 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:13.454640 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:15.454718 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:17.953833 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:19.953875 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:21.954243 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:23.954403 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:26.453994 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	I0908 13:21:27.454550 2962976 pod_ready.go:94] pod "coredns-66bc5c9577-7www8" is "Ready"
	I0908 13:21:27.454580 2962976 pod_ready.go:86] duration metric: took 36.006192784s for pod "coredns-66bc5c9577-7www8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.457609 2962976 pod_ready.go:83] waiting for pod "etcd-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.462474 2962976 pod_ready.go:94] pod "etcd-no-preload-978911" is "Ready"
	I0908 13:21:27.462506 2962976 pod_ready.go:86] duration metric: took 4.86819ms for pod "etcd-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.464750 2962976 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.474261 2962976 pod_ready.go:94] pod "kube-apiserver-no-preload-978911" is "Ready"
	I0908 13:21:27.474285 2962976 pod_ready.go:86] duration metric: took 9.508793ms for pod "kube-apiserver-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.477518 2962976 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.657753 2962976 pod_ready.go:94] pod "kube-controller-manager-no-preload-978911" is "Ready"
	I0908 13:21:27.657786 2962976 pod_ready.go:86] duration metric: took 180.242988ms for pod "kube-controller-manager-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.853931 2962976 pod_ready.go:83] waiting for pod "kube-proxy-zb84d" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:28.252014 2962976 pod_ready.go:94] pod "kube-proxy-zb84d" is "Ready"
	I0908 13:21:28.252038 2962976 pod_ready.go:86] duration metric: took 398.080343ms for pod "kube-proxy-zb84d" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:28.451995 2962976 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:28.852776 2962976 pod_ready.go:94] pod "kube-scheduler-no-preload-978911" is "Ready"
	I0908 13:21:28.852805 2962976 pod_ready.go:86] duration metric: took 400.781462ms for pod "kube-scheduler-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:28.852820 2962976 pod_ready.go:40] duration metric: took 37.408986235s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:21:28.914752 2962976 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 13:21:28.918052 2962976 out.go:179] * Done! kubectl is now configured to use "no-preload-978911" cluster and "default" namespace by default
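
For context, the pod_ready.go lines above poll kube-system pods by label selector (k8s-app=kube-dns, component=etcd, and so on) roughly every two seconds until each pod reports the Ready condition or the 4m0s budget runs out. Below is a minimal client-go sketch of that kind of wait, written for illustration only; the kubeconfig path, label selector, and poll interval are assumptions, and it is not minikube's own implementation.

// podreadywait.go - minimal sketch (not minikube's pod_ready.go): poll kube-system
// pods matching a label selector until they all report the Ready condition.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod has the Ready condition set to True.
func podIsReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Assumption: a standard kubeconfig at ~/.kube/config pointing at the cluster.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selector := "k8s-app=kube-dns" // one of the labels waited on in the log above
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for _, p := range pods.Items {
				if !podIsReady(p) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Printf("all %q pods are Ready\n", selector)
				return
			}
		}
		time.Sleep(2 * time.Second) // roughly the interval visible in the log above
	}
	fmt.Println("timed out waiting for pods to be Ready")
}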
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	9f55ee77b42a1       523cad1a4df73       3 minutes ago       Exited              dashboard-metrics-scraper   6                   35541ea28d6e4       dashboard-metrics-scraper-5f989dc9cf-fhtcn
	0b74b004307ae       ba04bb24b9575       8 minutes ago       Running             storage-provisioner         2                   bbe423ed58299       storage-provisioner
	6b1f4f786b29f       1611cd07b61d5       9 minutes ago       Running             busybox                     1                   5d758b6ec40e7       busybox
	b5ae572e6c321       97e04611ad434       9 minutes ago       Running             coredns                     1                   488eef1731825       coredns-5dd5756b68-d78mw
	873d0865e41ef       b1a8c6f707935       9 minutes ago       Running             kindnet-cni                 1                   8475d2f244eb3       kindnet-hc6xz
	105ff83200e38       940f54a5bcae9       9 minutes ago       Running             kube-proxy                  1                   a2b9277fc3436       kube-proxy-5lcjb
	66645eab9b879       ba04bb24b9575       9 minutes ago       Exited              storage-provisioner         1                   bbe423ed58299       storage-provisioner
	a472dcf368c18       9cdd6470f48c8       9 minutes ago       Running             etcd                        1                   dd9040d73d36b       etcd-old-k8s-version-552634
	306f060aeefe6       46cc66ccc7c19       9 minutes ago       Running             kube-controller-manager     1                   fb5b01c16af98       kube-controller-manager-old-k8s-version-552634
	f99ef8a528998       762dce4090c5f       9 minutes ago       Running             kube-scheduler              1                   46f9a80d026b2       kube-scheduler-old-k8s-version-552634
	ad5401098ad61       00543d2fe5d71       9 minutes ago       Running             kube-apiserver              1                   5aa721eca188e       kube-apiserver-old-k8s-version-552634
	8b4252d29a3c9       1611cd07b61d5       10 minutes ago      Exited              busybox                     0                   f9cb69407935e       busybox
	887f29bb1a772       97e04611ad434       10 minutes ago      Exited              coredns                     0                   a553ca9be588b       coredns-5dd5756b68-d78mw
	9895c6c404f91       b1a8c6f707935       10 minutes ago      Exited              kindnet-cni                 0                   adf66c46e22c7       kindnet-hc6xz
	ebc5022b0aeaa       940f54a5bcae9       10 minutes ago      Exited              kube-proxy                  0                   37c9c0bd19c4a       kube-proxy-5lcjb
	12a8c02c281d2       00543d2fe5d71       11 minutes ago      Exited              kube-apiserver              0                   0747d2824c491       kube-apiserver-old-k8s-version-552634
	56c17c12d8122       762dce4090c5f       11 minutes ago      Exited              kube-scheduler              0                   3ca96ef5cfac0       kube-scheduler-old-k8s-version-552634
	86ee799990106       9cdd6470f48c8       11 minutes ago      Exited              etcd                        0                   59bfcc3d2aaac       etcd-old-k8s-version-552634
	cbd09fa5b3a5f       46cc66ccc7c19       11 minutes ago      Exited              kube-controller-manager     0                   b6c9cd52874d0       kube-controller-manager-old-k8s-version-552634
	
	
	==> containerd <==
	Sep 08 13:21:48 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:21:48.720503860Z" level=info msg="StartContainer for \"5a57f510be2911337c5ddbb1da93d2a36688b86b47eba465fe6c26a7d2cb6ef9\" returns successfully"
	Sep 08 13:21:48 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:21:48.743210478Z" level=info msg="shim disconnected" id=5a57f510be2911337c5ddbb1da93d2a36688b86b47eba465fe6c26a7d2cb6ef9 namespace=k8s.io
	Sep 08 13:21:48 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:21:48.743249746Z" level=warning msg="cleaning up after shim disconnected" id=5a57f510be2911337c5ddbb1da93d2a36688b86b47eba465fe6c26a7d2cb6ef9 namespace=k8s.io
	Sep 08 13:21:48 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:21:48.743287678Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:21:49 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:21:49.707350997Z" level=info msg="RemoveContainer for \"857cc59bb78b32d73b8ad3fab568dd8478c2b8c176843fb16b06a532aebd3f19\""
	Sep 08 13:21:49 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:21:49.717812552Z" level=info msg="RemoveContainer for \"857cc59bb78b32d73b8ad3fab568dd8478c2b8c176843fb16b06a532aebd3f19\" returns successfully"
	Sep 08 13:24:11 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:11.637646503Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 08 13:24:11 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:11.643159435Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 08 13:24:11 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:11.645226488Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 08 13:24:11 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:11.645259406Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 08 13:24:25 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:25.638199290Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 13:24:25 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:25.640504433Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:24:25 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:25.765136666Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:24:26 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:26.072530145Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 13:24:26 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:26.072569701Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.638891597Z" level=info msg="CreateContainer within sandbox \"35541ea28d6e4cdb4992bca1189dbb99418187d926c8d278cf72f8d44e4f8809\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
	Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.657597012Z" level=info msg="CreateContainer within sandbox \"35541ea28d6e4cdb4992bca1189dbb99418187d926c8d278cf72f8d44e4f8809\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e\""
	Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.658461251Z" level=info msg="StartContainer for \"9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e\""
	Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.713810519Z" level=info msg="StartContainer for \"9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e\" returns successfully"
	Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.716770372Z" level=info msg="received exit event container_id:\"9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e\" id:\"9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e\" pid:2826 exit_status:255 exited_at:{seconds:1757337877 nanos:716474437}"
	Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.742866419Z" level=info msg="shim disconnected" id=9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e namespace=k8s.io
	Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.742903243Z" level=warning msg="cleaning up after shim disconnected" id=9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e namespace=k8s.io
	Sep 08 13:24:37 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:37.742941248Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:24:38 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:38.090288585Z" level=info msg="RemoveContainer for \"5a57f510be2911337c5ddbb1da93d2a36688b86b47eba465fe6c26a7d2cb6ef9\""
	Sep 08 13:24:38 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:24:38.097504912Z" level=info msg="RemoveContainer for \"5a57f510be2911337c5ddbb1da93d2a36688b86b47eba465fe6c26a7d2cb6ef9\" returns successfully"
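
The two pull failures in the containerd log above are the relevant ones for this run: the metrics-server test image on fake.domain cannot be resolved (no such host), and the kubernetesui/dashboard pull is rejected by registry-1.docker.io with 429 Too Many Requests (the unauthenticated pull rate limit). On the pod side such failures surface as a container Waiting state (ErrImagePull / ImagePullBackOff), which can be read back with a short client-go snippet. The sketch below is a generic diagnostic, not part of this test suite; the kubeconfig path is an assumption, the namespace is taken from the log.

// pullstatus.go - diagnostic sketch: print the Waiting reason/message for containers
// of pods in the kubernetes-dashboard namespace, where the 429 rate-limit error
// above shows up as an image pull failure.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a standard kubeconfig at ~/.kube/config pointing at the cluster.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cst := range p.Status.ContainerStatuses {
			if cst.State.Waiting != nil {
				// e.g. reason=ImagePullBackOff while registry-1.docker.io keeps
				// answering 429 Too Many Requests for the dashboard image.
				fmt.Printf("%s/%s: reason=%s message=%q\n",
					p.Name, cst.Name, cst.State.Waiting.Reason, cst.State.Waiting.Message)
			}
		}
	}
}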
	
	
	==> coredns [887f29bb1a772ea77ca331bb52f5acf91f88e07e5ede3c3a3a74a6959bc2d4e5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52648 - 59601 "HINFO IN 4276947130458500050.8211269930752862866. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027626934s
	
	
	==> coredns [b5ae572e6c3217fbcd8a8a6bb2451f35b959e4517116e7b1d056ad2e30ede111] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40339 - 53587 "HINFO IN 3405231602673676994.1114408218903977437. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.065848228s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-552634
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-552634
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=old-k8s-version-552634
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_16_55_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:16:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-552634
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 13:27:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 13:23:51 +0000   Mon, 08 Sep 2025 13:16:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 13:23:51 +0000   Mon, 08 Sep 2025 13:16:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 13:23:51 +0000   Mon, 08 Sep 2025 13:16:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 13:23:51 +0000   Mon, 08 Sep 2025 13:17:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-552634
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 18ee3a277fa24969a47806fabfc259c0
	  System UUID:                637c7b27-ceff-4552-8bc2-a5a52de7b8d9
	  Boot ID:                    9f5228b8-b58e-4b72-938a-84f5f7e9d841
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-d78mw                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-old-k8s-version-552634                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-hc6xz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-old-k8s-version-552634             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-old-k8s-version-552634    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-5lcjb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-old-k8s-version-552634             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-57f55c9bc5-ppxnd                   100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         10m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-fhtcn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-2x25h             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m35s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node old-k8s-version-552634 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node old-k8s-version-552634 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node old-k8s-version-552634 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             10m                    kubelet          Node old-k8s-version-552634 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                10m                    kubelet          Node old-k8s-version-552634 status is now: NodeReady
	  Normal  RegisteredNode           10m                    node-controller  Node old-k8s-version-552634 event: Registered Node old-k8s-version-552634 in Controller
	  Normal  Starting                 9m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m43s (x8 over 9m43s)  kubelet          Node old-k8s-version-552634 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m43s (x8 over 9m43s)  kubelet          Node old-k8s-version-552634 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m43s (x7 over 9m43s)  kubelet          Node old-k8s-version-552634 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m25s                  node-controller  Node old-k8s-version-552634 event: Registered Node old-k8s-version-552634 in Controller
	
	
	==> dmesg <==
	[Sep 8 12:32] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [86ee7999901068cbb878838a61a92c5a7f51e9f4bcca6f825a9580a81d698726] <==
	{"level":"info","ts":"2025-09-08T13:16:46.858909Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-08T13:16:46.859245Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-08T13:16:46.859325Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-08T13:16:46.860514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-09-08T13:16:46.861226Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-08T13:16:46.861291Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-08T13:16:46.862576Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-09-08T13:16:47.416142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-09-08T13:16:47.416189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-09-08T13:16:47.416218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-09-08T13:16:47.416369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-09-08T13:16:47.416482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-09-08T13:16:47.416574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-09-08T13:16:47.416667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-09-08T13:16:47.419515Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T13:16:47.420115Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-552634 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-08T13:16:47.420267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T13:16:47.421471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-09-08T13:16:47.421887Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T13:16:47.421996Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T13:16:47.422057Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T13:16:47.422206Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T13:16:47.43072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-08T13:16:47.432069Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-08T13:16:47.432222Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [a472dcf368c18f5e6a5223534ab4707aba16f0f2f8f1d2a0a9e7ffbfa099c6a6] <==
	{"level":"info","ts":"2025-09-08T13:18:10.276051Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-08T13:18:10.276148Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-08T13:18:10.276473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-09-08T13:18:10.276653Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-09-08T13:18:10.276861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T13:18:10.277011Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T13:18:10.300492Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-08T13:18:10.305872Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-08T13:18:10.306073Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-08T13:18:10.306477Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-08T13:18:10.307823Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-08T13:18:11.239821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-08T13:18:11.240087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-08T13:18:11.24026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-09-08T13:18:11.240353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-09-08T13:18:11.240432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-09-08T13:18:11.240522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-09-08T13:18:11.240598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-09-08T13:18:11.242592Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-552634 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-08T13:18:11.242762Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T13:18:11.244005Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-08T13:18:11.242819Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T13:18:11.255565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-09-08T13:18:11.258399Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-08T13:18:11.258515Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:27:52 up 19:10,  0 users,  load average: 0.20, 0.88, 1.86
	Linux old-k8s-version-552634 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [873d0865e41efef1cdc5723e8effa5359186450c3fdc15bde735024a70d67f7a] <==
	I0908 13:25:46.609187       1 main.go:301] handling current node
	I0908 13:25:56.612320       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:25:56.612356       1 main.go:301] handling current node
	I0908 13:26:06.615234       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:26:06.615270       1 main.go:301] handling current node
	I0908 13:26:16.608893       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:26:16.608926       1 main.go:301] handling current node
	I0908 13:26:26.615094       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:26:26.615145       1 main.go:301] handling current node
	I0908 13:26:36.617240       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:26:36.617275       1 main.go:301] handling current node
	I0908 13:26:46.608719       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:26:46.608766       1 main.go:301] handling current node
	I0908 13:26:56.612233       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:26:56.612265       1 main.go:301] handling current node
	I0908 13:27:06.616655       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:27:06.616687       1 main.go:301] handling current node
	I0908 13:27:16.608120       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:27:16.608153       1 main.go:301] handling current node
	I0908 13:27:26.609167       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:27:26.609203       1 main.go:301] handling current node
	I0908 13:27:36.613741       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:27:36.613782       1 main.go:301] handling current node
	I0908 13:27:46.609146       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:27:46.609206       1 main.go:301] handling current node
	
	
	==> kindnet [9895c6c404f918357f3fe8f891a3b387606c5f693ab288d576f52f4f6ff3214f] <==
	I0908 13:17:10.107287       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 13:17:10.107858       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0908 13:17:10.208007       1 main.go:148] setting mtu 1500 for CNI 
	I0908 13:17:10.208039       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 13:17:10.208056       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T13:17:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0908 13:17:10.399790       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 13:17:10.399870       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 13:17:10.399897       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 13:17:10.402060       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0908 13:17:10.601314       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 13:17:10.601444       1 metrics.go:72] Registering metrics
	I0908 13:17:10.601636       1 controller.go:711] "Syncing nftables rules"
	I0908 13:17:20.403628       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:17:20.403684       1 main.go:301] handling current node
	I0908 13:17:30.404510       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:17:30.404697       1 main.go:301] handling current node
	I0908 13:17:40.399112       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:17:40.399145       1 main.go:301] handling current node
	
	
	==> kube-apiserver [12a8c02c281d2079f1f0b5cb46532c15ceef81c18c7ee4d11f73a0a60044feaf] <==
	I0908 13:16:53.898716       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0908 13:16:53.921523       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0908 13:16:53.933698       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	http2: server: error reading preface from client 192.168.76.2:33400: read tcp 192.168.76.2:8443->192.168.76.2:33400: read: connection reset by peer
	I0908 13:17:06.498189       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0908 13:17:06.692514       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	W0908 13:17:48.243423       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:17:48.243491       1 controller.go:135] adding "v1beta1.metrics.k8s.io" to AggregationController failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 13:17:48.243950       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I0908 13:17:48.244200       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 13:17:48.252554       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:17:48.252623       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0908 13:17:48.252661       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0908 13:17:48.252685       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I0908 13:17:48.252695       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0908 13:17:48.417382       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.99.6.154"}
	W0908 13:17:48.439844       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:17:48.439914       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0908 13:17:48.441341       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	W0908 13:17:48.455258       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:17:48.455515       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	
	
	==> kube-apiserver [ad5401098ad612028621c25bb73c63049b339fca6a311e78ef665de02be9a792] <==
	I0908 13:23:15.301653       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0908 13:23:15.301681       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0908 13:23:15.303580       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:24:13.882829       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.6.154:443: connect: connection refused
	I0908 13:24:13.882940       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 13:24:15.301711       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:24:15.301806       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 13:24:15.301817       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:24:15.303703       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:24:15.303736       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0908 13:24:15.303744       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:25:13.882276       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.6.154:443: connect: connection refused
	I0908 13:25:13.882306       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0908 13:26:13.882041       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.6.154:443: connect: connection refused
	I0908 13:26:13.882064       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 13:26:15.302478       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:26:15.302575       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 13:26:15.302615       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:26:15.304598       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:26:15.304621       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0908 13:26:15.304627       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:27:13.881582       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.6.154:443: connect: connection refused
	I0908 13:27:13.881612       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [306f060aeefe68259f7a715c7e170802f56b0889bb02eba2839a448bbe10626f] <==
	I0908 13:22:57.246892       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:23:26.785060       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:23:27.254584       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:23:56.789309       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:23:57.262703       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0908 13:24:24.653925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="79.03µs"
	E0908 13:24:26.794297       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:24:27.270995       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0908 13:24:36.666539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="107.402µs"
	I0908 13:24:38.101098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="186.604µs"
	I0908 13:24:40.651496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="102.324µs"
	I0908 13:24:47.726405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="125.092µs"
	I0908 13:24:53.650028       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="80.621µs"
	E0908 13:24:56.799501       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:24:57.278466       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:25:26.804593       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:25:27.286560       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:25:56.809678       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:25:57.294456       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:26:26.814787       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:26:27.303324       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:26:56.819241       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:26:57.312066       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:27:26.824102       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:27:27.328339       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [cbd09fa5b3a5f2baa29480044435939c6661de8956c0144f35b364d38a9a8c5d] <==
	I0908 13:17:06.905350       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-d78mw"
	I0908 13:17:06.961512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="451.892045ms"
	I0908 13:17:06.990983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.413519ms"
	I0908 13:17:06.991131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.636µs"
	I0908 13:17:07.009558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.894µs"
	I0908 13:17:07.066198       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.47µs"
	I0908 13:17:08.290007       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0908 13:17:08.327643       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-l7qhx"
	I0908 13:17:08.357540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.022947ms"
	I0908 13:17:08.370967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.380063ms"
	I0908 13:17:08.371362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.453µs"
	I0908 13:17:09.300479       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.878µs"
	I0908 13:17:09.324970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.488µs"
	I0908 13:17:09.335279       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.751µs"
	I0908 13:17:33.297910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.014µs"
	I0908 13:17:33.333907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.272005ms"
	I0908 13:17:33.334047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.728µs"
	I0908 13:17:48.277484       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I0908 13:17:48.293942       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-57f55c9bc5-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	I0908 13:17:48.302667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="26.010125ms"
	E0908 13:17:48.302864       1 replica_set.go:557] sync "kube-system/metrics-server-57f55c9bc5" failed with pods "metrics-server-57f55c9bc5-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0908 13:17:48.338413       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-ppxnd"
	I0908 13:17:48.367949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="65.027384ms"
	I0908 13:17:48.389055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="20.848675ms"
	I0908 13:17:48.389382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="83.682µs"
	
	
	==> kube-proxy [105ff83200e38a12913faaecd2d0fb83a38b4d40ae898a992f24c5f0b7a7c61b] <==
	I0908 13:18:16.237529       1 server_others.go:69] "Using iptables proxy"
	I0908 13:18:16.293058       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0908 13:18:16.409028       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:18:16.410966       1 server_others.go:152] "Using iptables Proxier"
	I0908 13:18:16.411007       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0908 13:18:16.411015       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0908 13:18:16.411045       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0908 13:18:16.411252       1 server.go:846] "Version info" version="v1.28.0"
	I0908 13:18:16.411262       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:18:16.415120       1 config.go:188] "Starting service config controller"
	I0908 13:18:16.415145       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0908 13:18:16.415164       1 config.go:97] "Starting endpoint slice config controller"
	I0908 13:18:16.415168       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0908 13:18:16.415590       1 config.go:315] "Starting node config controller"
	I0908 13:18:16.415596       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0908 13:18:16.515704       1 shared_informer.go:318] Caches are synced for node config
	I0908 13:18:16.515740       1 shared_informer.go:318] Caches are synced for service config
	I0908 13:18:16.515781       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ebc5022b0aeaa3ac29b4e9ce1ac124b836e51d29870a4e127105d359fce607b3] <==
	I0908 13:17:07.582138       1 server_others.go:69] "Using iptables proxy"
	I0908 13:17:07.625942       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0908 13:17:07.698479       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:17:07.704374       1 server_others.go:152] "Using iptables Proxier"
	I0908 13:17:07.704422       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0908 13:17:07.704431       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0908 13:17:07.704471       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0908 13:17:07.706455       1 server.go:846] "Version info" version="v1.28.0"
	I0908 13:17:07.706479       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:17:07.707911       1 config.go:188] "Starting service config controller"
	I0908 13:17:07.707928       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0908 13:17:07.707946       1 config.go:97] "Starting endpoint slice config controller"
	I0908 13:17:07.707949       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0908 13:17:07.708351       1 config.go:315] "Starting node config controller"
	I0908 13:17:07.708358       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0908 13:17:07.808233       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0908 13:17:07.808322       1 shared_informer.go:318] Caches are synced for service config
	I0908 13:17:07.808601       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [56c17c12d8122dd6d365bc92de07c71d041472f11a561084d79ef44eda4e026b] <==
	W0908 13:16:50.798751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0908 13:16:50.799375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0908 13:16:50.798813       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0908 13:16:50.801342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0908 13:16:50.799293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0908 13:16:50.801760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0908 13:16:50.800539       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0908 13:16:50.801790       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0908 13:16:50.801683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0908 13:16:50.801806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0908 13:16:51.675764       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0908 13:16:51.676019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0908 13:16:51.710585       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0908 13:16:51.710807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0908 13:16:51.813571       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0908 13:16:51.813612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0908 13:16:51.868033       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0908 13:16:51.868069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0908 13:16:51.930076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0908 13:16:51.930623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0908 13:16:51.965948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0908 13:16:51.965986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0908 13:16:52.116484       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0908 13:16:52.116778       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0908 13:16:53.983929       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f99ef8a5289987ae2ab7840f3ca0c7298d3bca981189b327213d7ac0466ffddc] <==
	W0908 13:18:14.287661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0908 13:18:14.287683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0908 13:18:14.287802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0908 13:18:14.287821       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0908 13:18:14.287995       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0908 13:18:14.288017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0908 13:18:14.288173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0908 13:18:14.288194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0908 13:18:14.288275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0908 13:18:14.288294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0908 13:18:14.293206       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0908 13:18:14.293247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0908 13:18:14.293264       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0908 13:18:14.293272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0908 13:18:14.293346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0908 13:18:14.293357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0908 13:18:14.293414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0908 13:18:14.293423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0908 13:18:14.301542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0908 13:18:14.301611       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0908 13:18:14.302109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0908 13:18:14.302132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0908 13:18:14.302325       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0908 13:18:14.302353       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0908 13:18:15.862837       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 08 13:26:36 old-k8s-version-552634 kubelet[667]: E0908 13:26:36.637704     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:26:36 old-k8s-version-552634 kubelet[667]: E0908 13:26:36.638238     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
	Sep 08 13:26:39 old-k8s-version-552634 kubelet[667]: I0908 13:26:39.636752     667 scope.go:117] "RemoveContainer" containerID="9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e"
	Sep 08 13:26:39 old-k8s-version-552634 kubelet[667]: E0908 13:26:39.637173     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
	Sep 08 13:26:49 old-k8s-version-552634 kubelet[667]: E0908 13:26:49.636933     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
	Sep 08 13:26:49 old-k8s-version-552634 kubelet[667]: E0908 13:26:49.637233     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:26:50 old-k8s-version-552634 kubelet[667]: I0908 13:26:50.636633     667 scope.go:117] "RemoveContainer" containerID="9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e"
	Sep 08 13:26:50 old-k8s-version-552634 kubelet[667]: E0908 13:26:50.637253     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
	Sep 08 13:27:01 old-k8s-version-552634 kubelet[667]: E0908 13:27:01.637306     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
	Sep 08 13:27:01 old-k8s-version-552634 kubelet[667]: E0908 13:27:01.638119     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:27:03 old-k8s-version-552634 kubelet[667]: I0908 13:27:03.636341     667 scope.go:117] "RemoveContainer" containerID="9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e"
	Sep 08 13:27:03 old-k8s-version-552634 kubelet[667]: E0908 13:27:03.636673     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
	Sep 08 13:27:14 old-k8s-version-552634 kubelet[667]: E0908 13:27:14.637238     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
	Sep 08 13:27:15 old-k8s-version-552634 kubelet[667]: I0908 13:27:15.645713     667 scope.go:117] "RemoveContainer" containerID="9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e"
	Sep 08 13:27:15 old-k8s-version-552634 kubelet[667]: E0908 13:27:15.646563     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
	Sep 08 13:27:16 old-k8s-version-552634 kubelet[667]: E0908 13:27:16.638155     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:27:28 old-k8s-version-552634 kubelet[667]: E0908 13:27:28.637349     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:27:29 old-k8s-version-552634 kubelet[667]: I0908 13:27:29.636546     667 scope.go:117] "RemoveContainer" containerID="9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e"
	Sep 08 13:27:29 old-k8s-version-552634 kubelet[667]: E0908 13:27:29.636854     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
	Sep 08 13:27:29 old-k8s-version-552634 kubelet[667]: E0908 13:27:29.637445     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
	Sep 08 13:27:40 old-k8s-version-552634 kubelet[667]: E0908 13:27:40.636743     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:27:42 old-k8s-version-552634 kubelet[667]: I0908 13:27:42.636751     667 scope.go:117] "RemoveContainer" containerID="9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e"
	Sep 08 13:27:42 old-k8s-version-552634 kubelet[667]: E0908 13:27:42.637056     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
	Sep 08 13:27:43 old-k8s-version-552634 kubelet[667]: E0908 13:27:43.637305     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
	Sep 08 13:27:52 old-k8s-version-552634 kubelet[667]: E0908 13:27:52.643008     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	
	
	==> storage-provisioner [0b74b004307ae8f29b60bbbe51b55dd3ea17fad6807bb10d9fdaede541bcaa19] <==
	I0908 13:18:57.887092       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 13:18:57.915119       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 13:18:57.918502       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0908 13:19:15.317463       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 13:19:15.317887       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"20fbcd62-30a7-4d88-b856-ad9fb9fbe64d", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-552634_53f6aea1-7a70-4fd8-a67e-250e35f21845 became leader
	I0908 13:19:15.317956       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-552634_53f6aea1-7a70-4fd8-a67e-250e35f21845!
	I0908 13:19:15.418954       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-552634_53f6aea1-7a70-4fd8-a67e-250e35f21845!
	
	
	==> storage-provisioner [66645eab9b879074e918236fe3987ab393e3cfbf8d3bc59ea2e30b38c88ef369] <==
	I0908 13:18:15.931183       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 13:18:45.934234       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-552634 -n old-k8s-version-552634
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-552634 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-ppxnd kubernetes-dashboard-8694d4445c-2x25h
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-552634 describe pod metrics-server-57f55c9bc5-ppxnd kubernetes-dashboard-8694d4445c-2x25h
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-552634 describe pod metrics-server-57f55c9bc5-ppxnd kubernetes-dashboard-8694d4445c-2x25h: exit status 1 (80.161182ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ppxnd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-2x25h" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-552634 describe pod metrics-server-57f55c9bc5-ppxnd kubernetes-dashboard-8694d4445c-2x25h: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.65s)
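
The wait that timed out above (start_stop_delete_test.go:272) is, in effect, a poll for a Ready pod carrying the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace. A minimal client-go sketch of that kind of wait follows; it is illustrative only, not the minikube test helper itself, and it assumes a kubeconfig at the default path and a 5s poll interval.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 5s, give up after 9m (the same budget the failing test used).
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, p := range pods.Items {
				if podReady(p) {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("dashboard pod never became Ready:", err)
		return
	}
	fmt.Println("dashboard pod is Ready")
}

With the dashboard image stuck in ImagePullBackOff, a loop of this shape can only run out its 9m budget, which is the context-deadline failure reported above.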

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-96c5t" [de89b47d-3928-4575-a5b7-9e358e5e0c42] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 13:21:34.195852 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:21:51.022466 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:23:47.956116 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:26:34.196081 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-978911 -n no-preload-978911
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-08 13:30:29.672892179 +0000 UTC m=+3460.008427260
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-978911 describe po kubernetes-dashboard-855c9754f9-96c5t -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context no-preload-978911 describe po kubernetes-dashboard-855c9754f9-96c5t -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-96c5t
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-978911/192.168.85.2
Start Time:       Mon, 08 Sep 2025 13:20:52 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m7tdb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-m7tdb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m37s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t to no-preload-978911
Normal   Pulling    6m21s (x5 over 9m37s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     6m20s (x5 over 9m33s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m20s (x5 over 9m33s)   kubelet            Error: ErrImagePull
Warning  Failed     4m24s (x20 over 9m33s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m12s (x21 over 9m33s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-978911 logs kubernetes-dashboard-855c9754f9-96c5t -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context no-preload-978911 logs kubernetes-dashboard-855c9754f9-96c5t -n kubernetes-dashboard: exit status 1 (109.488468ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-96c5t" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context no-preload-978911 logs kubernetes-dashboard-855c9754f9-96c5t -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
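
The Events table above pins the failure on Docker Hub's unauthenticated pull limit (429 Too Many Requests), so the dashboard container never left ImagePullBackOff. The same Warning events can be pulled programmatically; a small client-go sketch follows, illustrative only, with the pod name and namespace taken from this run and the kubeconfig path assumed to be the default.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Narrow the core events to warnings about the one dashboard pod,
	// much like the Events section that `kubectl describe pod` prints.
	events, err := cs.CoreV1().Events("kubernetes-dashboard").List(context.Background(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=kubernetes-dashboard-855c9754f9-96c5t,type=Warning",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.LastTimestamp, e.Reason, e.Message)
	}
}

In this run it would surface the same Failed / ErrImagePull and ImagePullBackOff messages shown in the describe output above.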
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-978911
helpers_test.go:243: (dbg) docker inspect no-preload-978911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857",
	        "Created": "2025-09-08T13:19:03.26664771Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2963100,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:20:36.233443019Z",
	            "FinishedAt": "2025-09-08T13:20:35.358612725Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857/hostname",
	        "HostsPath": "/var/lib/docker/containers/24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857/hosts",
	        "LogPath": "/var/lib/docker/containers/24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857/24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857-json.log",
	        "Name": "/no-preload-978911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-978911:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-978911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857",
	                "LowerDir": "/var/lib/docker/overlay2/2c3702870ce1f1ebd73a74b1c7bdc7c936e7e11927b295131daadcb49cea8fe8-init/diff:/var/lib/docker/overlay2/665ff8bb3d07b8342629df037737f3667c7c59d9d1f85930dc3dfdf138460626/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c3702870ce1f1ebd73a74b1c7bdc7c936e7e11927b295131daadcb49cea8fe8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c3702870ce1f1ebd73a74b1c7bdc7c936e7e11927b295131daadcb49cea8fe8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c3702870ce1f1ebd73a74b1c7bdc7c936e7e11927b295131daadcb49cea8fe8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-978911",
	                "Source": "/var/lib/docker/volumes/no-preload-978911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-978911",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-978911",
	                "name.minikube.sigs.k8s.io": "no-preload-978911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "109bcb7306c3e12c46f6025c1a9776c0f338398c89088746355bbef941de31ab",
	            "SandboxKey": "/var/run/docker/netns/109bcb7306c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36735"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-978911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:57:c9:ef:5d:ad",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7fb24c92d376a0dfeabdb333b420b7ec19169f584cdf4a776434be0437a8941f",
	                    "EndpointID": "b6bb37a50f7122d779eb26d93dbdd8cdbf70525a449e6a6397c6a42d17b74a11",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-978911",
	                        "24330c1b4333"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
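
The inspect output above also shows how the kic container publishes the guest ports to loopback on the host, with 8443/tcp (the apiserver port) mapped to 127.0.0.1:36736 in this run. A short sketch with the Docker Go SDK that reads that mapping back out follows; it is illustrative only, uses the container name from this run, and assumes the SDK's environment-based client configuration.

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
	"github.com/docker/go-connections/nat"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Inspect the kic container and print the host-side binding of the apiserver port.
	info, err := cli.ContainerInspect(context.Background(), "no-preload-978911")
	if err != nil {
		panic(err)
	}
	for _, b := range info.NetworkSettings.Ports[nat.Port("8443/tcp")] {
		fmt.Printf("apiserver published at https://%s:%s\n", b.HostIP, b.HostPort)
	}
}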
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-978911 -n no-preload-978911
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-978911 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-978911 logs -n 25: (1.932647381s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ force-systemd-env-386836 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-386836 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ delete  │ -p force-systemd-env-386836                                                                                                                                                                                                                         │ force-systemd-env-386836 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ start   │ -p cert-expiration-713579 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-713579   │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ start   │ -p pause-864887 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ pause   │ -p pause-864887 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ unpause │ -p pause-864887 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ pause   │ -p pause-864887 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ delete  │ -p pause-864887 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ delete  │ -p pause-864887                                                                                                                                                                                                                                     │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ start   │ -p cert-options-480035 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-480035      │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:16 UTC │
	│ ssh     │ cert-options-480035 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-480035      │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
	│ ssh     │ -p cert-options-480035 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-480035      │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
	│ delete  │ -p cert-options-480035                                                                                                                                                                                                                              │ cert-options-480035      │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
	│ start   │ -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-552634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:17 UTC │ 08 Sep 25 13:17 UTC │
	│ stop    │ -p old-k8s-version-552634 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:17 UTC │ 08 Sep 25 13:18 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-552634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
	│ start   │ -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
	│ start   │ -p cert-expiration-713579 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-713579   │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
	│ delete  │ -p cert-expiration-713579                                                                                                                                                                                                                           │ cert-expiration-713579   │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:19 UTC │
	│ start   │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:19 UTC │ 08 Sep 25 13:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-978911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ stop    │ -p no-preload-978911 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ addons  │ enable dashboard -p no-preload-978911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ start   │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:21 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:20:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:20:35.931338 2962976 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:20:35.931455 2962976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:20:35.931470 2962976 out.go:374] Setting ErrFile to fd 2...
	I0908 13:20:35.931478 2962976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:20:35.931739 2962976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 13:20:35.932098 2962976 out.go:368] Setting JSON to false
	I0908 13:20:35.933046 2962976 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":68588,"bootTime":1757269048,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 13:20:35.933113 2962976 start.go:140] virtualization:  
	I0908 13:20:35.938091 2962976 out.go:179] * [no-preload-978911] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:20:35.941259 2962976 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:20:35.941303 2962976 notify.go:220] Checking for updates...
	I0908 13:20:35.946964 2962976 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:20:35.949861 2962976 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:20:35.952715 2962976 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 13:20:35.956376 2962976 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:20:35.959333 2962976 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:20:35.962746 2962976 config.go:182] Loaded profile config "no-preload-978911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:20:35.963352 2962976 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:20:35.991533 2962976 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:20:35.991638 2962976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:20:36.072292 2962976 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:20:36.052728775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:20:36.072406 2962976 docker.go:318] overlay module found
	I0908 13:20:36.075656 2962976 out.go:179] * Using the docker driver based on existing profile
	I0908 13:20:36.078527 2962976 start.go:304] selected driver: docker
	I0908 13:20:36.078546 2962976 start.go:918] validating driver "docker" against &{Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:20:36.078664 2962976 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:20:36.079452 2962976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:20:36.145451 2962976 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:20:36.13563539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:20:36.145819 2962976 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:20:36.145841 2962976 cni.go:84] Creating CNI manager for ""
	I0908 13:20:36.145901 2962976 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:20:36.145935 2962976 start.go:348] cluster config:
	{Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:20:36.150798 2962976 out.go:179] * Starting "no-preload-978911" primary control-plane node in "no-preload-978911" cluster
	I0908 13:20:36.153634 2962976 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 13:20:36.156700 2962976 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:20:36.159655 2962976 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:20:36.159871 2962976 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/config.json ...
	I0908 13:20:36.160216 2962976 cache.go:107] acquiring lock: {Name:mk9f7cd9bf685dbdd22a939bba5743203e9424b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160296 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0908 13:20:36.160304 2962976 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 94.209µs
	I0908 13:20:36.160319 2962976 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0908 13:20:36.159703 2962976 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:20:36.160406 2962976 cache.go:107] acquiring lock: {Name:mka44a87e995f06fac0280236e9044a05cbf0c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160447 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0908 13:20:36.160453 2962976 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0" took 50.452µs
	I0908 13:20:36.160460 2962976 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0908 13:20:36.160483 2962976 cache.go:107] acquiring lock: {Name:mkb0b6bdd176d599d5a383a38a60d5e44912d326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160512 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0908 13:20:36.160517 2962976 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0" took 35.248µs
	I0908 13:20:36.160522 2962976 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0908 13:20:36.160531 2962976 cache.go:107] acquiring lock: {Name:mk3a1846ff1d17320a61c4f0cd7f03a465580c64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160557 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0908 13:20:36.160562 2962976 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0" took 31.646µs
	I0908 13:20:36.160568 2962976 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0908 13:20:36.160576 2962976 cache.go:107] acquiring lock: {Name:mk1c90c15fea0bf3c7271fb14d259c914df38d83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160600 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0908 13:20:36.160605 2962976 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0" took 29.727µs
	I0908 13:20:36.160612 2962976 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0908 13:20:36.160622 2962976 cache.go:107] acquiring lock: {Name:mke7032f762990626a62b2503bb54454bb8e4428 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160650 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I0908 13:20:36.160655 2962976 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 33.541µs
	I0908 13:20:36.160660 2962976 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0908 13:20:36.160669 2962976 cache.go:107] acquiring lock: {Name:mk8b0387706fadd68f571a10efda673c0c270d63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160693 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0908 13:20:36.160698 2962976 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.12µs
	I0908 13:20:36.160709 2962976 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0908 13:20:36.160719 2962976 cache.go:107] acquiring lock: {Name:mk33ca43d20f07ddc371c694dc9c7a9ebcb088c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160744 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0908 13:20:36.160749 2962976 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.663µs
	I0908 13:20:36.160754 2962976 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0908 13:20:36.160760 2962976 cache.go:87] Successfully saved all images to host disk.
	I0908 13:20:36.180376 2962976 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 13:20:36.180401 2962976 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 13:20:36.180415 2962976 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:20:36.180445 2962976 start.go:360] acquireMachinesLock for no-preload-978911: {Name:mk7699a0142cc873eeb1530cb26c114199650434 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.180501 2962976 start.go:364] duration metric: took 35.707µs to acquireMachinesLock for "no-preload-978911"
	I0908 13:20:36.180529 2962976 start.go:96] Skipping create...Using existing machine configuration
	I0908 13:20:36.180538 2962976 fix.go:54] fixHost starting: 
	I0908 13:20:36.180803 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:36.198068 2962976 fix.go:112] recreateIfNeeded on no-preload-978911: state=Stopped err=<nil>
	W0908 13:20:36.198097 2962976 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 13:20:36.201397 2962976 out.go:252] * Restarting existing docker container for "no-preload-978911" ...
	I0908 13:20:36.201532 2962976 cli_runner.go:164] Run: docker start no-preload-978911
	I0908 13:20:36.450658 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:36.474394 2962976 kic.go:430] container "no-preload-978911" state is running.
	I0908 13:20:36.474790 2962976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978911
	I0908 13:20:36.494941 2962976 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/config.json ...
	I0908 13:20:36.495172 2962976 machine.go:93] provisionDockerMachine start ...
	I0908 13:20:36.495236 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:36.514963 2962976 main.go:141] libmachine: Using SSH client type: native
	I0908 13:20:36.515301 2962976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36733 <nil> <nil>}
	I0908 13:20:36.515314 2962976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:20:36.515903 2962976 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55582->127.0.0.1:36733: read: connection reset by peer
	I0908 13:20:39.637915 2962976 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-978911
	
	I0908 13:20:39.637978 2962976 ubuntu.go:182] provisioning hostname "no-preload-978911"
	I0908 13:20:39.638058 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:39.656178 2962976 main.go:141] libmachine: Using SSH client type: native
	I0908 13:20:39.656497 2962976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36733 <nil> <nil>}
	I0908 13:20:39.656514 2962976 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-978911 && echo "no-preload-978911" | sudo tee /etc/hostname
	I0908 13:20:39.795726 2962976 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-978911
	
	I0908 13:20:39.795805 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:39.814502 2962976 main.go:141] libmachine: Using SSH client type: native
	I0908 13:20:39.814810 2962976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36733 <nil> <nil>}
	I0908 13:20:39.814835 2962976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-978911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-978911/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-978911' | sudo tee -a /etc/hosts; 
				fi
			fi
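The net effect of that guarded script is one loopback alias for the node name; assuming the kicbase image ships no prior 127.0.1.1 entry, the appended /etc/hosts line would simply be:

    127.0.1.1 no-preload-978911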
	I0908 13:20:39.950789 2962976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 13:20:39.950812 2962976 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-2749258/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-2749258/.minikube}
	I0908 13:20:39.950834 2962976 ubuntu.go:190] setting up certificates
	I0908 13:20:39.950843 2962976 provision.go:84] configureAuth start
	I0908 13:20:39.950907 2962976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978911
	I0908 13:20:39.972214 2962976 provision.go:143] copyHostCerts
	I0908 13:20:39.972295 2962976 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem, removing ...
	I0908 13:20:39.972317 2962976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem
	I0908 13:20:39.972393 2962976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem (1082 bytes)
	I0908 13:20:39.972496 2962976 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem, removing ...
	I0908 13:20:39.972501 2962976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem
	I0908 13:20:39.972526 2962976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem (1123 bytes)
	I0908 13:20:39.972586 2962976 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem, removing ...
	I0908 13:20:39.972591 2962976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem
	I0908 13:20:39.972613 2962976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem (1679 bytes)
	I0908 13:20:39.972667 2962976 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem org=jenkins.no-preload-978911 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-978911]
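minikube produces that server certificate in Go; a hypothetical openssl equivalent (filenames assumed, org and SAN list copied from the log line above) would look like:

    # assumed filenames; SANs taken from the san=[...] list above
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.no-preload-978911" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:no-preload-978911') \
      -out server.pem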
	I0908 13:20:40.245168 2962976 provision.go:177] copyRemoteCerts
	I0908 13:20:40.245243 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:20:40.245295 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.263254 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.355579 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 13:20:40.380935 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 13:20:40.406306 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 13:20:40.432096 2962976 provision.go:87] duration metric: took 481.231644ms to configureAuth
	I0908 13:20:40.432126 2962976 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:20:40.432326 2962976 config.go:182] Loaded profile config "no-preload-978911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:20:40.432340 2962976 machine.go:96] duration metric: took 3.937157056s to provisionDockerMachine
	I0908 13:20:40.432348 2962976 start.go:293] postStartSetup for "no-preload-978911" (driver="docker")
	I0908 13:20:40.432359 2962976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:20:40.432420 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:20:40.432470 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.449780 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.539192 2962976 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:20:40.543086 2962976 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:20:40.543119 2962976 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:20:40.543129 2962976 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:20:40.543142 2962976 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:20:40.543156 2962976 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/addons for local assets ...
	I0908 13:20:40.543213 2962976 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/files for local assets ...
	I0908 13:20:40.543299 2962976 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem -> 27511142.pem in /etc/ssl/certs
	I0908 13:20:40.543407 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 13:20:40.552362 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:20:40.577039 2962976 start.go:296] duration metric: took 144.675775ms for postStartSetup
	I0908 13:20:40.577118 2962976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:20:40.577178 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.593852 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.684143 2962976 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:20:40.688435 2962976 fix.go:56] duration metric: took 4.507889752s for fixHost
	I0908 13:20:40.688464 2962976 start.go:83] releasing machines lock for "no-preload-978911", held for 4.507944625s
	I0908 13:20:40.688533 2962976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978911
	I0908 13:20:40.705355 2962976 ssh_runner.go:195] Run: cat /version.json
	I0908 13:20:40.705419 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.705609 2962976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:20:40.705666 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.727686 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.735980 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.822021 2962976 ssh_runner.go:195] Run: systemctl --version
	I0908 13:20:40.992138 2962976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:20:40.996696 2962976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 13:20:41.017301 2962976 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 13:20:41.017379 2962976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:20:41.026660 2962976 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
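After that patch, a loopback config under /etc/cni/net.d would look roughly like the snippet below (file name and any extra fields depend on what the kicbase image ships):

    {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
    }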
	I0908 13:20:41.026732 2962976 start.go:495] detecting cgroup driver to use...
	I0908 13:20:41.026779 2962976 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:20:41.026849 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 13:20:41.041661 2962976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 13:20:41.053793 2962976 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:20:41.053929 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:20:41.068141 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:20:41.079992 2962976 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:20:41.158894 2962976 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:20:41.250698 2962976 docker.go:234] disabling docker service ...
	I0908 13:20:41.250775 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:20:41.265554 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:20:41.277916 2962976 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:20:41.368244 2962976 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:20:41.462413 2962976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:20:41.475092 2962976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:20:41.493271 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 13:20:41.505845 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 13:20:41.517791 2962976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 13:20:41.517922 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 13:20:41.528370 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:20:41.541165 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 13:20:41.551232 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:20:41.562025 2962976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:20:41.572196 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 13:20:41.582707 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 13:20:41.593503 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
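Taken together, those sed edits leave the CRI section of /etc/containerd/config.toml looking roughly like the fragment below (layout assumed from a stock containerd 1.7 config; only the keys touched above are shown):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false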
	I0908 13:20:41.604600 2962976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:20:41.614626 2962976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:20:41.623508 2962976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:20:41.714072 2962976 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 13:20:41.897907 2962976 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0908 13:20:41.898011 2962976 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0908 13:20:41.902605 2962976 start.go:563] Will wait 60s for crictl version
	I0908 13:20:41.902693 2962976 ssh_runner.go:195] Run: which crictl
	I0908 13:20:41.907099 2962976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:20:41.945823 2962976 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0908 13:20:41.945960 2962976 ssh_runner.go:195] Run: containerd --version
	I0908 13:20:41.970995 2962976 ssh_runner.go:195] Run: containerd --version
	I0908 13:20:42.004794 2962976 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0908 13:20:42.019524 2962976 cli_runner.go:164] Run: docker network inspect no-preload-978911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:20:42.038523 2962976 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0908 13:20:42.042566 2962976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:20:42.054333 2962976 kubeadm.go:875] updating cluster {Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:20:42.054518 2962976 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:20:42.054568 2962976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:20:42.107352 2962976 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:20:42.107387 2962976 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:20:42.107396 2962976 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 containerd true true} ...
	I0908 13:20:42.107557 2962976 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-978911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
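The empty ExecStart= followed by a second ExecStart= is the standard systemd drop-in idiom: a service's command may only be set once, so an override file first clears the base unit's value before supplying its own. The [Service] portion of the drop-in scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down therefore amounts to:

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-978911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2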
	I0908 13:20:42.107646 2962976 ssh_runner.go:195] Run: sudo crictl info
	I0908 13:20:42.191420 2962976 cni.go:84] Creating CNI manager for ""
	I0908 13:20:42.191459 2962976 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:20:42.191472 2962976 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:20:42.191522 2962976 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-978911 NodeName:no-preload-978911 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:20:42.191696 2962976 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-978911"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 13:20:42.191812 2962976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:20:42.205135 2962976 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:20:42.205221 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:20:42.217434 2962976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0908 13:20:42.248179 2962976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:20:42.275286 2962976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
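The 2230-byte file staged just above is the three-document kubeadm/kubelet/kube-proxy config printed earlier. On a fresh node it would be fed to kubeadm along the lines of the sketch below (invocation assumed, not taken from this log; on this restart path the file is first left as kubeadm.yaml.new so it can be reconciled with the existing cluster):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml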
	I0908 13:20:42.308117 2962976 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:20:42.312692 2962976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:20:42.326442 2962976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:20:42.421795 2962976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:20:42.436584 2962976 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911 for IP: 192.168.85.2
	I0908 13:20:42.436607 2962976 certs.go:194] generating shared ca certs ...
	I0908 13:20:42.436625 2962976 certs.go:226] acquiring lock for ca certs: {Name:mka64c3c41f67c038c6cf0d4d20f2375b7abe78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:20:42.436807 2962976 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key
	I0908 13:20:42.436928 2962976 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key
	I0908 13:20:42.436943 2962976 certs.go:256] generating profile certs ...
	I0908 13:20:42.437066 2962976 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.key
	I0908 13:20:42.437162 2962976 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/apiserver.key.7fd60a6c
	I0908 13:20:42.437238 2962976 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/proxy-client.key
	I0908 13:20:42.437393 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem (1338 bytes)
	W0908 13:20:42.437445 2962976 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114_empty.pem, impossibly tiny 0 bytes
	I0908 13:20:42.437460 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 13:20:42.437491 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem (1082 bytes)
	I0908 13:20:42.437542 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:20:42.437581 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem (1679 bytes)
	I0908 13:20:42.437641 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:20:42.438302 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:20:42.466909 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 13:20:42.494218 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:20:42.521632 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 13:20:42.551803 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 13:20:42.582057 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 13:20:42.614465 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:20:42.652857 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 13:20:42.682624 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /usr/share/ca-certificates/27511142.pem (1708 bytes)
	I0908 13:20:42.708216 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:20:42.734706 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem --> /usr/share/ca-certificates/2751114.pem (1338 bytes)
	I0908 13:20:42.761119 2962976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:20:42.781025 2962976 ssh_runner.go:195] Run: openssl version
	I0908 13:20:42.787728 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:20:42.797410 2962976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:20:42.801102 2962976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:20:42.801201 2962976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:20:42.808137 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 13:20:42.817759 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2751114.pem && ln -fs /usr/share/ca-certificates/2751114.pem /etc/ssl/certs/2751114.pem"
	I0908 13:20:42.827168 2962976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2751114.pem
	I0908 13:20:42.831084 2962976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 12:41 /usr/share/ca-certificates/2751114.pem
	I0908 13:20:42.831147 2962976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2751114.pem
	I0908 13:20:42.838272 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2751114.pem /etc/ssl/certs/51391683.0"
	I0908 13:20:42.847707 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27511142.pem && ln -fs /usr/share/ca-certificates/27511142.pem /etc/ssl/certs/27511142.pem"
	I0908 13:20:42.859460 2962976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27511142.pem
	I0908 13:20:42.863196 2962976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 12:41 /usr/share/ca-certificates/27511142.pem
	I0908 13:20:42.863282 2962976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27511142.pem
	I0908 13:20:42.870770 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27511142.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 13:20:42.881017 2962976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:20:42.884978 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 13:20:42.894328 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 13:20:42.901729 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 13:20:42.909115 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 13:20:42.916106 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 13:20:42.923024 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0908 13:20:42.930253 2962976 kubeadm.go:392] StartCluster: {Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:20:42.930382 2962976 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0908 13:20:42.930445 2962976 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:20:42.967887 2962976 cri.go:89] found id: "bdd85fd62e90072711cf66c0db968c1136a28f624fc072df147df0fc494584c8"
	I0908 13:20:42.967912 2962976 cri.go:89] found id: "e685e05b08b51e456163cacd1644bf5fab5dd1c6118ed288241b851f6da29e62"
	I0908 13:20:42.967917 2962976 cri.go:89] found id: "aa297480aa1b27d3b15093502059992dfea640300a65451e6f3db7b6b056ed1a"
	I0908 13:20:42.967923 2962976 cri.go:89] found id: "d5252e4ac54a43b7539b2bfe24a8a0183a6b9420e5f2255895a872dd266dfbdd"
	I0908 13:20:42.967927 2962976 cri.go:89] found id: "f5e8fe9a2b29ca8f991932c0c60513abc177286d77ac00c6ac9f77de28096302"
	I0908 13:20:42.967933 2962976 cri.go:89] found id: "e59a4771913f0c586033aa2f970d5003227c9262bc5c73b7ef6007c8ab2801a0"
	I0908 13:20:42.967937 2962976 cri.go:89] found id: "453e5e825289a6e70e8cee4d4d3e9be4fa57968b9f3101e0486c55f00773e336"
	I0908 13:20:42.967962 2962976 cri.go:89] found id: "89cf83ed06352d9266afeb8d98085daf1e7cc6dfe5636d2a24ff0d4804025f62"
	I0908 13:20:42.967970 2962976 cri.go:89] found id: ""
	I0908 13:20:42.968033 2962976 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0908 13:20:42.983189 2962976 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-08T13:20:42Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0908 13:20:42.983277 2962976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 13:20:42.992272 2962976 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 13:20:42.992292 2962976 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 13:20:42.992372 2962976 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 13:20:43.001397 2962976 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 13:20:43.002879 2962976 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-978911" does not appear in /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:20:43.003451 2962976 kubeconfig.go:62] /home/jenkins/minikube-integration/21508-2749258/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-978911" cluster setting kubeconfig missing "no-preload-978911" context setting]
	I0908 13:20:43.004375 2962976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:20:43.009201 2962976 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 13:20:43.018628 2962976 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0908 13:20:43.018703 2962976 kubeadm.go:593] duration metric: took 26.405085ms to restartPrimaryControlPlane
	I0908 13:20:43.018720 2962976 kubeadm.go:394] duration metric: took 88.476669ms to StartCluster
	I0908 13:20:43.018749 2962976 settings.go:142] acquiring lock: {Name:mk4a46c455122873706b4d72c01ce6416a89153c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:20:43.018813 2962976 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:20:43.019719 2962976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:20:43.019921 2962976 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 13:20:43.020275 2962976 config.go:182] Loaded profile config "no-preload-978911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:20:43.020343 2962976 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 13:20:43.020435 2962976 addons.go:69] Setting storage-provisioner=true in profile "no-preload-978911"
	I0908 13:20:43.020455 2962976 addons.go:238] Setting addon storage-provisioner=true in "no-preload-978911"
	W0908 13:20:43.020466 2962976 addons.go:247] addon storage-provisioner should already be in state true
	I0908 13:20:43.020459 2962976 addons.go:69] Setting dashboard=true in profile "no-preload-978911"
	I0908 13:20:43.020534 2962976 addons.go:238] Setting addon dashboard=true in "no-preload-978911"
	W0908 13:20:43.020568 2962976 addons.go:247] addon dashboard should already be in state true
	I0908 13:20:43.020608 2962976 host.go:66] Checking if "no-preload-978911" exists ...
	I0908 13:20:43.020489 2962976 host.go:66] Checking if "no-preload-978911" exists ...
	I0908 13:20:43.021413 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.021413 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.020494 2962976 addons.go:69] Setting metrics-server=true in profile "no-preload-978911"
	I0908 13:20:43.022090 2962976 addons.go:238] Setting addon metrics-server=true in "no-preload-978911"
	W0908 13:20:43.022105 2962976 addons.go:247] addon metrics-server should already be in state true
	I0908 13:20:43.022137 2962976 host.go:66] Checking if "no-preload-978911" exists ...
	I0908 13:20:43.022666 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.020436 2962976 addons.go:69] Setting default-storageclass=true in profile "no-preload-978911"
	I0908 13:20:43.024278 2962976 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-978911"
	I0908 13:20:43.025361 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.026372 2962976 out.go:179] * Verifying Kubernetes components...
	I0908 13:20:43.030861 2962976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:20:43.087484 2962976 addons.go:238] Setting addon default-storageclass=true in "no-preload-978911"
	W0908 13:20:43.087508 2962976 addons.go:247] addon default-storageclass should already be in state true
	I0908 13:20:43.087533 2962976 host.go:66] Checking if "no-preload-978911" exists ...
	I0908 13:20:43.087950 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.102706 2962976 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 13:20:43.102748 2962976 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 13:20:43.102758 2962976 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 13:20:43.105724 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 13:20:43.105749 2962976 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 13:20:43.105773 2962976 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:20:43.105788 2962976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 13:20:43.105823 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:43.105850 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:43.115360 2962976 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 13:20:43.118627 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 13:20:43.118650 2962976 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 13:20:43.118715 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:43.136790 2962976 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 13:20:43.136812 2962976 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 13:20:43.136876 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:43.175291 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:43.191796 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:43.193563 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:43.208474 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:43.247091 2962976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:20:43.292511 2962976 node_ready.go:35] waiting up to 6m0s for node "no-preload-978911" to be "Ready" ...
	I0908 13:20:43.382311 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:20:43.421803 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 13:20:43.421878 2962976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 13:20:43.458209 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 13:20:43.458297 2962976 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 13:20:43.473379 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:20:43.552131 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 13:20:43.552207 2962976 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 13:20:43.560638 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 13:20:43.560711 2962976 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 13:20:43.624453 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:20:43.624479 2962976 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 13:20:43.699654 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 13:20:43.699676 2962976 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 13:20:43.771524 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0908 13:20:43.785451 2962976 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:20:43.785495 2962976 retry.go:31] will retry after 247.912555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 13:20:43.785541 2962976 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:20:43.785547 2962976 retry.go:31] will retry after 303.088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:20:43.834968 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 13:20:43.835041 2962976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 13:20:43.961116 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 13:20:43.961144 2962976 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 13:20:44.033768 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:20:44.089120 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:20:44.174947 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 13:20:44.174976 2962976 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 13:20:44.350161 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 13:20:44.350203 2962976 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 13:20:44.457915 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 13:20:44.457956 2962976 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 13:20:44.496024 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:20:44.496054 2962976 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 13:20:44.520347 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:20:48.489246 2962976 node_ready.go:49] node "no-preload-978911" is "Ready"
	I0908 13:20:48.489276 2962976 node_ready.go:38] duration metric: took 5.196680279s for node "no-preload-978911" to be "Ready" ...
	I0908 13:20:48.489290 2962976 api_server.go:52] waiting for apiserver process to appear ...
	I0908 13:20:48.489355 2962976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:20:51.238545 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.466984527s)
	I0908 13:20:51.238585 2962976 addons.go:479] Verifying addon metrics-server=true in "no-preload-978911"
	I0908 13:20:51.400235 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.366422544s)
	I0908 13:20:51.400286 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.311136503s)
	I0908 13:20:51.400525 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.880137919s)
	I0908 13:20:51.400723 2962976 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.911337726s)
	I0908 13:20:51.400744 2962976 api_server.go:72] duration metric: took 8.380794855s to wait for apiserver process to appear ...
	I0908 13:20:51.400750 2962976 api_server.go:88] waiting for apiserver healthz status ...
	I0908 13:20:51.400766 2962976 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0908 13:20:51.403695 2962976 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-978911 addons enable metrics-server
	
	I0908 13:20:51.409743 2962976 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0908 13:20:51.411883 2962976 api_server.go:141] control plane version: v1.34.0
	I0908 13:20:51.411916 2962976 api_server.go:131] duration metric: took 11.159718ms to wait for apiserver health ...
	I0908 13:20:51.411925 2962976 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 13:20:51.414126 2962976 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0908 13:20:51.415529 2962976 system_pods.go:59] 9 kube-system pods found
	I0908 13:20:51.415567 2962976 system_pods.go:61] "coredns-66bc5c9577-7www8" [cb6a614e-8f35-46f4-957d-04268f222190] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:20:51.415576 2962976 system_pods.go:61] "etcd-no-preload-978911" [4e38fee5-f757-4ee8-a97f-c76e4b633559] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:20:51.415587 2962976 system_pods.go:61] "kindnet-8cc7v" [3da2f7cd-76d4-456a-8cc8-069d4c2405a6] Running
	I0908 13:20:51.415596 2962976 system_pods.go:61] "kube-apiserver-no-preload-978911" [44a03487-7993-4879-9ab7-88227004b4f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:20:51.415617 2962976 system_pods.go:61] "kube-controller-manager-no-preload-978911" [7a8863ca-4835-46f7-9529-dd33b2a669f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:20:51.415626 2962976 system_pods.go:61] "kube-proxy-zb84d" [05431e58-3897-4783-899f-e079efa82e52] Running
	I0908 13:20:51.415636 2962976 system_pods.go:61] "kube-scheduler-no-preload-978911" [3d53a214-024a-4b7c-9500-23b47958a0c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:20:51.415643 2962976 system_pods.go:61] "metrics-server-746fcd58dc-vh962" [959e88f4-10f0-4c5b-98da-0451d012b212] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:20:51.415653 2962976 system_pods.go:61] "storage-provisioner" [062103e0-2e60-4495-84fe-e00955426335] Running
	I0908 13:20:51.415659 2962976 system_pods.go:74] duration metric: took 3.729531ms to wait for pod list to return data ...
	I0908 13:20:51.415666 2962976 default_sa.go:34] waiting for default service account to be created ...
	I0908 13:20:51.417731 2962976 addons.go:514] duration metric: took 8.397368128s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0908 13:20:51.418437 2962976 default_sa.go:45] found service account: "default"
	I0908 13:20:51.418459 2962976 default_sa.go:55] duration metric: took 2.786524ms for default service account to be created ...
	I0908 13:20:51.418468 2962976 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 13:20:51.421224 2962976 system_pods.go:86] 9 kube-system pods found
	I0908 13:20:51.421265 2962976 system_pods.go:89] "coredns-66bc5c9577-7www8" [cb6a614e-8f35-46f4-957d-04268f222190] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:20:51.421274 2962976 system_pods.go:89] "etcd-no-preload-978911" [4e38fee5-f757-4ee8-a97f-c76e4b633559] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:20:51.421281 2962976 system_pods.go:89] "kindnet-8cc7v" [3da2f7cd-76d4-456a-8cc8-069d4c2405a6] Running
	I0908 13:20:51.421293 2962976 system_pods.go:89] "kube-apiserver-no-preload-978911" [44a03487-7993-4879-9ab7-88227004b4f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:20:51.421309 2962976 system_pods.go:89] "kube-controller-manager-no-preload-978911" [7a8863ca-4835-46f7-9529-dd33b2a669f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:20:51.421314 2962976 system_pods.go:89] "kube-proxy-zb84d" [05431e58-3897-4783-899f-e079efa82e52] Running
	I0908 13:20:51.421321 2962976 system_pods.go:89] "kube-scheduler-no-preload-978911" [3d53a214-024a-4b7c-9500-23b47958a0c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:20:51.421331 2962976 system_pods.go:89] "metrics-server-746fcd58dc-vh962" [959e88f4-10f0-4c5b-98da-0451d012b212] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:20:51.421336 2962976 system_pods.go:89] "storage-provisioner" [062103e0-2e60-4495-84fe-e00955426335] Running
	I0908 13:20:51.421344 2962976 system_pods.go:126] duration metric: took 2.87014ms to wait for k8s-apps to be running ...
	I0908 13:20:51.421353 2962976 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 13:20:51.421410 2962976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:20:51.436310 2962976 system_svc.go:56] duration metric: took 14.944353ms WaitForService to wait for kubelet
	I0908 13:20:51.436337 2962976 kubeadm.go:578] duration metric: took 8.416385263s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:20:51.436356 2962976 node_conditions.go:102] verifying NodePressure condition ...
	I0908 13:20:51.439945 2962976 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 13:20:51.440017 2962976 node_conditions.go:123] node cpu capacity is 2
	I0908 13:20:51.440045 2962976 node_conditions.go:105] duration metric: took 3.683559ms to run NodePressure ...
	I0908 13:20:51.440074 2962976 start.go:241] waiting for startup goroutines ...
	I0908 13:20:51.440108 2962976 start.go:246] waiting for cluster config update ...
	I0908 13:20:51.440137 2962976 start.go:255] writing updated cluster config ...
	I0908 13:20:51.440471 2962976 ssh_runner.go:195] Run: rm -f paused
	I0908 13:20:51.443803 2962976 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:20:51.448363 2962976 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7www8" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 13:20:53.454728 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:20:55.456275 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:20:57.954301 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:20:59.954409 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:02.454336 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:04.954288 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:06.954819 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:09.453453 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:11.453986 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:13.454640 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:15.454718 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:17.953833 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:19.953875 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:21.954243 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:23.954403 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:26.453994 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	I0908 13:21:27.454550 2962976 pod_ready.go:94] pod "coredns-66bc5c9577-7www8" is "Ready"
	I0908 13:21:27.454580 2962976 pod_ready.go:86] duration metric: took 36.006192784s for pod "coredns-66bc5c9577-7www8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.457609 2962976 pod_ready.go:83] waiting for pod "etcd-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.462474 2962976 pod_ready.go:94] pod "etcd-no-preload-978911" is "Ready"
	I0908 13:21:27.462506 2962976 pod_ready.go:86] duration metric: took 4.86819ms for pod "etcd-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.464750 2962976 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.474261 2962976 pod_ready.go:94] pod "kube-apiserver-no-preload-978911" is "Ready"
	I0908 13:21:27.474285 2962976 pod_ready.go:86] duration metric: took 9.508793ms for pod "kube-apiserver-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.477518 2962976 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.657753 2962976 pod_ready.go:94] pod "kube-controller-manager-no-preload-978911" is "Ready"
	I0908 13:21:27.657786 2962976 pod_ready.go:86] duration metric: took 180.242988ms for pod "kube-controller-manager-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.853931 2962976 pod_ready.go:83] waiting for pod "kube-proxy-zb84d" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:28.252014 2962976 pod_ready.go:94] pod "kube-proxy-zb84d" is "Ready"
	I0908 13:21:28.252038 2962976 pod_ready.go:86] duration metric: took 398.080343ms for pod "kube-proxy-zb84d" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:28.451995 2962976 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:28.852776 2962976 pod_ready.go:94] pod "kube-scheduler-no-preload-978911" is "Ready"
	I0908 13:21:28.852805 2962976 pod_ready.go:86] duration metric: took 400.781462ms for pod "kube-scheduler-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:28.852820 2962976 pod_ready.go:40] duration metric: took 37.408986235s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:21:28.914752 2962976 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 13:21:28.918052 2962976 out.go:179] * Done! kubectl is now configured to use "no-preload-978911" cluster and "default" namespace by default
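
The api_server.go lines above poll https://192.168.85.2:8443/healthz until it returns 200 with a body of "ok" before declaring the control plane healthy. A minimal Go sketch of that probe follows; the URL is taken from the log, and skipping TLS verification is a shortcut for illustration only (the real check validates against the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy GETs the apiserver's /healthz endpoint and treats a
// 200 response with body "ok" as healthy, mirroring the check logged above.
// InsecureSkipVerify is for illustration only; minikube trusts the cluster CA.
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.85.2:8443/healthz")
	fmt.Println(ok, err)
}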
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	932b7513f2edf       523cad1a4df73       3 minutes ago       Exited              dashboard-metrics-scraper   6                   411da2a550179       dashboard-metrics-scraper-6ffb444bf9-fpptr
	93d2704d5a3f9       66749159455b3       8 minutes ago       Running             storage-provisioner         2                   19efbda2c0a5e       storage-provisioner
	d824c7b540ec1       138784d87c9c5       9 minutes ago       Running             coredns                     1                   6cb3e5f9141b7       coredns-66bc5c9577-7www8
	4621bfda7dec4       1611cd07b61d5       9 minutes ago       Running             busybox                     1                   f2102d5258939       busybox
	682d750821b44       b1a8c6f707935       9 minutes ago       Running             kindnet-cni                 1                   98fe16de354ae       kindnet-8cc7v
	e3598e5f85914       66749159455b3       9 minutes ago       Exited              storage-provisioner         1                   19efbda2c0a5e       storage-provisioner
	03b6d80d695b8       6fc32d66c1411       9 minutes ago       Running             kube-proxy                  1                   a3583924057c7       kube-proxy-zb84d
	b824f7e2b00ef       a1894772a478e       9 minutes ago       Running             etcd                        1                   e6c013134cb72       etcd-no-preload-978911
	5bb6d3486bdd6       a25f5ef9c34c3       9 minutes ago       Running             kube-scheduler              1                   76496f1d745fa       kube-scheduler-no-preload-978911
	3d049d3c26e41       d291939e99406       9 minutes ago       Running             kube-apiserver              1                   77e31618cee33       kube-apiserver-no-preload-978911
	ea8f24cba6f98       996be7e86d9b3       9 minutes ago       Running             kube-controller-manager     1                   4f510283fc1d4       kube-controller-manager-no-preload-978911
	b2d7f168abc96       1611cd07b61d5       10 minutes ago      Exited              busybox                     0                   e842701f59485       busybox
	bdd85fd62e900       138784d87c9c5       10 minutes ago      Exited              coredns                     0                   f6df22374ec66       coredns-66bc5c9577-7www8
	e685e05b08b51       b1a8c6f707935       10 minutes ago      Exited              kindnet-cni                 0                   dc4559b71e9bf       kindnet-8cc7v
	d5252e4ac54a4       6fc32d66c1411       10 minutes ago      Exited              kube-proxy                  0                   8af2378844944       kube-proxy-zb84d
	f5e8fe9a2b29c       a25f5ef9c34c3       11 minutes ago      Exited              kube-scheduler              0                   988dbfe721d3c       kube-scheduler-no-preload-978911
	e59a4771913f0       996be7e86d9b3       11 minutes ago      Exited              kube-controller-manager     0                   578cc7b14d6e0       kube-controller-manager-no-preload-978911
	453e5e825289a       d291939e99406       11 minutes ago      Exited              kube-apiserver              0                   73f1bd4a88515       kube-apiserver-no-preload-978911
	89cf83ed06352       a1894772a478e       11 minutes ago      Exited              etcd                        0                   cf3cbf7dab6c0       etcd-no-preload-978911
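
The table above and the cri.go "found id:" lines earlier in the trace come from the same query: asking the CRI runtime for container IDs filtered to kube-system pods. A rough Go sketch of that listing, assuming crictl is installed and already configured for the node's containerd socket:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs runs crictl with --quiet so it prints one container
// ID per line, filtered to pods in the kube-system namespace, the same
// command the log shows minikube running over SSH.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	fmt.Println(ids, err)
}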
	
	
	==> containerd <==
	Sep 08 13:24:04 no-preload-978911 containerd[575]: time="2025-09-08T13:24:04.494739809Z" level=info msg="RemoveContainer for \"4fe58042bafb4b5fc59cbafdbe174483fe2712173c09dce8f42be1e16058edee\" returns successfully"
	Sep 08 13:24:08 no-preload-978911 containerd[575]: time="2025-09-08T13:24:08.639787500Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 13:24:08 no-preload-978911 containerd[575]: time="2025-09-08T13:24:08.642498507Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:24:08 no-preload-978911 containerd[575]: time="2025-09-08T13:24:08.799446513Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:24:09 no-preload-978911 containerd[575]: time="2025-09-08T13:24:09.104303618Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 13:24:09 no-preload-978911 containerd[575]: time="2025-09-08T13:24:09.104417667Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 08 13:26:37 no-preload-978911 containerd[575]: time="2025-09-08T13:26:37.639417713Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 08 13:26:37 no-preload-978911 containerd[575]: time="2025-09-08T13:26:37.646424273Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 08 13:26:37 no-preload-978911 containerd[575]: time="2025-09-08T13:26:37.648536626Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 08 13:26:37 no-preload-978911 containerd[575]: time="2025-09-08T13:26:37.648517656Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 08 13:26:47 no-preload-978911 containerd[575]: time="2025-09-08T13:26:47.639741621Z" level=info msg="CreateContainer within sandbox \"411da2a5501797ee6af497aee8ce43558927a623457b05e0f7c477289219eea7\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
	Sep 08 13:26:47 no-preload-978911 containerd[575]: time="2025-09-08T13:26:47.660004409Z" level=info msg="CreateContainer within sandbox \"411da2a5501797ee6af497aee8ce43558927a623457b05e0f7c477289219eea7\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393\""
	Sep 08 13:26:47 no-preload-978911 containerd[575]: time="2025-09-08T13:26:47.660640141Z" level=info msg="StartContainer for \"932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393\""
	Sep 08 13:26:47 no-preload-978911 containerd[575]: time="2025-09-08T13:26:47.715656905Z" level=info msg="StartContainer for \"932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393\" returns successfully"
	Sep 08 13:26:47 no-preload-978911 containerd[575]: time="2025-09-08T13:26:47.715884452Z" level=info msg="received exit event container_id:\"932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393\" id:\"932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393\" pid:2800 exit_status:255 exited_at:{seconds:1757338007 nanos:714822344}"
	Sep 08 13:26:47 no-preload-978911 containerd[575]: time="2025-09-08T13:26:47.742935780Z" level=info msg="shim disconnected" id=932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393 namespace=k8s.io
	Sep 08 13:26:47 no-preload-978911 containerd[575]: time="2025-09-08T13:26:47.742974236Z" level=warning msg="cleaning up after shim disconnected" id=932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393 namespace=k8s.io
	Sep 08 13:26:47 no-preload-978911 containerd[575]: time="2025-09-08T13:26:47.743010526Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:26:47 no-preload-978911 containerd[575]: time="2025-09-08T13:26:47.896784696Z" level=info msg="RemoveContainer for \"9f7d1f73e8057f15f918c8a069b8547de32b5ed2f7aacef4b2795d23814bb036\""
	Sep 08 13:26:47 no-preload-978911 containerd[575]: time="2025-09-08T13:26:47.908210869Z" level=info msg="RemoveContainer for \"9f7d1f73e8057f15f918c8a069b8547de32b5ed2f7aacef4b2795d23814bb036\" returns successfully"
	Sep 08 13:26:52 no-preload-978911 containerd[575]: time="2025-09-08T13:26:52.642272045Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 13:26:52 no-preload-978911 containerd[575]: time="2025-09-08T13:26:52.644720314Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:26:52 no-preload-978911 containerd[575]: time="2025-09-08T13:26:52.867047180Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:26:53 no-preload-978911 containerd[575]: time="2025-09-08T13:26:53.165166585Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 13:26:53 no-preload-978911 containerd[575]: time="2025-09-08T13:26:53.165216791Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
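
The repeated dashboard pulls above each end in a 429 from registry-1.docker.io, and the earlier retry.go lines ("will retry after 247.912555ms", "will retry after 303.088ms") show the same recovery pattern: retry with a randomized, growing delay until a deadline. A minimal Go sketch of that pattern; the function name and backoff parameters are illustrative, not minikube's or the kubelet's actual implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or the deadline passes,
// sleeping a randomized, doubling delay between attempts. Illustrative sketch
// of the "will retry after ..." behavior seen in the trace.
func retryWithBackoff(deadline time.Duration, fn func() error) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("connection refused")
		}
		return nil
	})
	fmt.Println(attempts, err)
}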
	
	
	==> coredns [bdd85fd62e90072711cf66c0db968c1136a28f624fc072df147df0fc494584c8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46697 - 39181 "HINFO IN 3567785162455233463.2380029076477560907. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.053623907s
	
	
	==> coredns [d824c7b540ec1bc7cbfe79da017657418f04a7cd09baf2d649873ca9006eab62] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47636 - 5170 "HINFO IN 777682767407552810.8526643665007990753. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024371508s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
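
The restarted coredns instance above cannot list Services, EndpointSlices, or Namespaces until the apiserver at 10.96.0.1:443 becomes reachable, then begins serving on :53. One way to confirm it is answering cluster queries is to resolve a service name against the cluster DNS address directly; a small Go sketch, assuming the conventional kube-dns ClusterIP 10.96.0.10 (which does not appear in this log):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// Resolve a well-known in-cluster name against the cluster DNS server
// directly, bypassing /etc/resolv.conf. The 10.96.0.10 address is the
// conventional kube-dns ClusterIP and is an assumption here.
func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err)
}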
	
	
	==> describe nodes <==
	Name:               no-preload-978911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-978911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=no-preload-978911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_19_36_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:19:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-978911
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 13:30:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 13:25:44 +0000   Mon, 08 Sep 2025 13:19:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 13:25:44 +0000   Mon, 08 Sep 2025 13:19:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 13:25:44 +0000   Mon, 08 Sep 2025 13:19:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 13:25:44 +0000   Mon, 08 Sep 2025 13:19:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-978911
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3dc0098bad94666b8110d0f8beda533
	  System UUID:                da8051b6-1408-4715-82b5-8cf1ac6dab71
	  Boot ID:                    9f5228b8-b58e-4b72-938a-84f5f7e9d841
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-7www8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-no-preload-978911                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-8cc7v                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-no-preload-978911              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-no-preload-978911     200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-zb84d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-no-preload-978911              100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-vh962               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fpptr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-96c5t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m40s                  kube-proxy       
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node no-preload-978911 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node no-preload-978911 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node no-preload-978911 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node no-preload-978911 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node no-preload-978911 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node no-preload-978911 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           10m                    node-controller  Node no-preload-978911 event: Registered Node no-preload-978911 in Controller
	  Normal   Starting                 9m49s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m49s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m49s (x8 over 9m49s)  kubelet          Node no-preload-978911 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m49s (x8 over 9m49s)  kubelet          Node no-preload-978911 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m49s (x7 over 9m49s)  kubelet          Node no-preload-978911 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m40s                  node-controller  Node no-preload-978911 event: Registered Node no-preload-978911 in Controller
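The node summary above (conditions, allocatable resources, and the repeated kubelet restart events) is node-describe output for the profile's single control-plane node. A minimal sketch of reproducing it directly, assuming the kube context name follows the minikube profile name no-preload-978911:

	# Print the same node conditions, capacity and event history shown above
	kubectl --context no-preload-978911 describe node no-preload-978911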
	
	
	==> dmesg <==
	[Sep 8 12:32] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [89cf83ed06352d9266afeb8d98085daf1e7cc6dfe5636d2a24ff0d4804025f62] <==
	{"level":"warn","ts":"2025-09-08T13:19:30.969407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:30.994219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.015279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.032104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.051624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.081709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.093479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.112752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.148035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.164589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.187940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.208175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.247257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.251205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.276140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.291354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.309207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.326852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.347417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.382335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.439252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.459233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.479141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.497168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.581389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59278","server-name":"","error":"EOF"}
	
	
	==> etcd [b824f7e2b00ef8bf7362bf6ae6b8ba0555a19feffebfcd3c008135d3ac1eb9b2] <==
	{"level":"warn","ts":"2025-09-08T13:20:46.781281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:46.812762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:46.825285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:46.859152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:46.889325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:46.908442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:46.922442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:46.938236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:46.983766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.009644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.038996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.052217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.070220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.096892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.116807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.139181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.152782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.172934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.187802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.206860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.255422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.293734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.313316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.352606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.404990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59520","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:30:31 up 19:13,  0 users,  load average: 0.83, 0.75, 1.64
	Linux no-preload-978911 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [682d750821b448b7795c3718a4b32c348bd1b033b722bd25ecbc55adc59d318c] <==
	I0908 13:28:30.616758       1 main.go:301] handling current node
	I0908 13:28:40.612719       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:28:40.612754       1 main.go:301] handling current node
	I0908 13:28:50.610646       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:28:50.610743       1 main.go:301] handling current node
	I0908 13:29:00.612299       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:29:00.612344       1 main.go:301] handling current node
	I0908 13:29:10.612153       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:29:10.612257       1 main.go:301] handling current node
	I0908 13:29:20.610445       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:29:20.610545       1 main.go:301] handling current node
	I0908 13:29:30.610409       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:29:30.610634       1 main.go:301] handling current node
	I0908 13:29:40.613467       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:29:40.613535       1 main.go:301] handling current node
	I0908 13:29:50.610516       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:29:50.610628       1 main.go:301] handling current node
	I0908 13:30:00.620990       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:30:00.621038       1 main.go:301] handling current node
	I0908 13:30:10.612941       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:30:10.612979       1 main.go:301] handling current node
	I0908 13:30:20.618309       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:30:20.618378       1 main.go:301] handling current node
	I0908 13:30:30.618439       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:30:30.618472       1 main.go:301] handling current node
	
	
	==> kindnet [e685e05b08b51e456163cacd1644bf5fab5dd1c6118ed288241b851f6da29e62] <==
	I0908 13:19:44.099861       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 13:19:44.100229       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0908 13:19:44.100430       1 main.go:148] setting mtu 1500 for CNI 
	I0908 13:19:44.100447       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 13:19:44.100465       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T13:19:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0908 13:19:44.400390       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 13:19:44.400424       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 13:19:44.400434       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 13:19:44.400568       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0908 13:19:44.601353       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 13:19:44.601377       1 metrics.go:72] Registering metrics
	I0908 13:19:44.601466       1 controller.go:711] "Syncing nftables rules"
	I0908 13:19:54.309979       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:19:54.310044       1 main.go:301] handling current node
	I0908 13:20:04.313950       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:20:04.313990       1 main.go:301] handling current node
	I0908 13:20:14.305827       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:20:14.305903       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3d049d3c26e41d0f9ed4a0f3d86102b35674f514ee9cbb2644fcfbd0f075b0f5] <==
	I0908 13:26:18.721457       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 13:26:49.491112       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:26:49.491176       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 13:26:49.491187       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:26:49.492228       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:26:49.492267       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 13:26:49.492279       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:27:17.421890       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:27:19.320551       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:28:17.466745       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:28:20.201405       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 13:28:49.492152       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:28:49.492242       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 13:28:49.492261       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:28:49.492461       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:28:49.492493       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 13:28:49.494281       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:29:30.401410       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:29:37.869114       1 stats.go:136] "Error getting keys" err="empty key: \"\""
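The repeated 503s above come from the aggregated metrics API: the apiserver cannot download the OpenAPI spec for v1beta1.metrics.k8s.io because the backing metrics-server pod never becomes ready (its image pull fails, as shown in the kubelet section below). A minimal sketch of confirming the APIService state, again assuming the kube context matches the profile name:

	# Check whether the aggregated API is marked Available and why not
	kubectl --context no-preload-978911 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-978911 describe apiservice v1beta1.metrics.k8s.io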
	
	
	==> kube-apiserver [453e5e825289a6e70e8cee4d4d3e9be4fa57968b9f3101e0486c55f00773e336] <==
	I0908 13:19:40.567306       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0908 13:19:40.653495       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 13:19:40.668225       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 13:19:40.717810       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E0908 13:20:22.493966       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:58200: use of closed network connection
	I0908 13:20:23.279549       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 13:20:23.284065       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:20:23.284120       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0908 13:20:23.284163       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0908 13:20:23.463420       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.111.134.213"}
	W0908 13:20:23.484744       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:20:23.485096       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0908 13:20:23.496408       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W0908 13:20:23.512457       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:20:23.512510       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0908 13:20:23.514079       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-controller-manager [e59a4771913f0c586033aa2f970d5003227c9262bc5c73b7ef6007c8ab2801a0] <==
	I0908 13:19:39.626822       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 13:19:39.629457       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-978911" podCIDRs=["10.244.0.0/24"]
	I0908 13:19:39.632235       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 13:19:39.635497       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0908 13:19:39.639683       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:19:39.645980       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 13:19:39.659962       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:19:39.660512       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 13:19:39.662930       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 13:19:39.663120       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 13:19:39.663379       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 13:19:39.664014       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 13:19:39.664051       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 13:19:39.664498       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 13:19:39.664530       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 13:19:39.665035       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 13:19:39.665066       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 13:19:39.665118       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0908 13:19:39.665261       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 13:19:39.665405       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-978911"
	I0908 13:19:39.665492       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 13:19:39.669290       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0908 13:19:39.671950       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 13:19:39.672010       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	E0908 13:20:23.326965       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-746fcd58dc\" failed with pods \"metrics-server-746fcd58dc-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [ea8f24cba6f98f3abd27cc0b37d1412932c064fa95e51d2bbe79bb10ebb23742] <==
	I0908 13:24:21.976790       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:24:51.910570       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:24:51.983405       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:25:21.915098       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:25:21.990651       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:25:51.920010       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:25:51.997730       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:26:21.924561       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:26:22.009451       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:26:51.929664       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:26:52.018709       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:27:21.934289       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:27:22.027629       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:27:51.939065       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:27:52.037155       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:28:21.943560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:28:22.045412       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:28:51.948664       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:28:52.052694       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:29:21.952947       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:29:22.061894       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:29:51.957043       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:29:52.069931       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:30:21.961485       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:30:22.077423       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [03b6d80d695b892c4255a8bd73705ce24a9e32cfb00dc95b821b0b1f0cd2322e] <==
	I0908 13:20:50.175099       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:20:50.265210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:20:50.366277       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:20:50.366329       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0908 13:20:50.366423       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:20:50.431589       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:20:50.431649       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:20:50.462832       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:20:50.463144       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:20:50.463158       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:20:50.470254       1 config.go:200] "Starting service config controller"
	I0908 13:20:50.470279       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:20:50.470298       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:20:50.470302       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:20:50.471541       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:20:50.471559       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:20:50.476343       1 config.go:309] "Starting node config controller"
	I0908 13:20:50.476367       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:20:50.476375       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:20:50.571786       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:20:50.571873       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 13:20:50.572271       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d5252e4ac54a43b7539b2bfe24a8a0183a6b9420e5f2255895a872dd266dfbdd] <==
	I0908 13:19:41.469721       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:19:41.564302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:19:41.677273       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:19:41.677314       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0908 13:19:41.677410       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:19:41.735733       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:19:41.735784       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:19:41.742438       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:19:41.742776       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:19:41.742791       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:19:41.745859       1 config.go:200] "Starting service config controller"
	I0908 13:19:41.749671       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:19:41.749752       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:19:41.749759       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:19:41.749775       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:19:41.749790       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:19:41.750641       1 config.go:309] "Starting node config controller"
	I0908 13:19:41.750649       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:19:41.750656       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:19:41.850981       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:19:41.851024       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:19:41.851066       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5bb6d3486bdd61e30d737319dc82458ac9bf73e9632bce01c4788fc396dfa9e7] <==
	I0908 13:20:47.242443       1 serving.go:386] Generated self-signed cert in-memory
	W0908 13:20:48.457432       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 13:20:48.460456       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 13:20:48.460487       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 13:20:48.460509       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 13:20:48.625230       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 13:20:48.625262       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:20:48.654687       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:20:48.654802       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 13:20:48.654563       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:20:48.663649       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:20:48.767308       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f5e8fe9a2b29ca8f991932c0c60513abc177286d77ac00c6ac9f77de28096302] <==
	E0908 13:19:32.837217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 13:19:32.837418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 13:19:32.837497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 13:19:32.837925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 13:19:32.838028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 13:19:32.838075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 13:19:32.838116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 13:19:32.838133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 13:19:32.854626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 13:19:32.854962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 13:19:32.855189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 13:19:32.855562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 13:19:32.855540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 13:19:32.855742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 13:19:33.662431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 13:19:33.673449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 13:19:33.681196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 13:19:33.741281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 13:19:33.790626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 13:19:33.830089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 13:19:33.848593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 13:19:33.903870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 13:19:33.909363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 13:19:33.943886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0908 13:19:35.817652       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 13:29:13 no-preload-978911 kubelet[660]: I0908 13:29:13.637987     660 scope.go:117] "RemoveContainer" containerID="932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393"
	Sep 08 13:29:13 no-preload-978911 kubelet[660]: E0908 13:29:13.638169     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	Sep 08 13:29:22 no-preload-978911 kubelet[660]: E0908 13:29:22.639941     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t" podUID="de89b47d-3928-4575-a5b7-9e358e5e0c42"
	Sep 08 13:29:25 no-preload-978911 kubelet[660]: I0908 13:29:25.637928     660 scope.go:117] "RemoveContainer" containerID="932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393"
	Sep 08 13:29:25 no-preload-978911 kubelet[660]: E0908 13:29:25.638119     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	Sep 08 13:29:27 no-preload-978911 kubelet[660]: E0908 13:29:27.638576     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-vh962" podUID="959e88f4-10f0-4c5b-98da-0451d012b212"
	Sep 08 13:29:33 no-preload-978911 kubelet[660]: E0908 13:29:33.638912     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t" podUID="de89b47d-3928-4575-a5b7-9e358e5e0c42"
	Sep 08 13:29:37 no-preload-978911 kubelet[660]: I0908 13:29:37.637667     660 scope.go:117] "RemoveContainer" containerID="932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393"
	Sep 08 13:29:37 no-preload-978911 kubelet[660]: E0908 13:29:37.637863     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	Sep 08 13:29:40 no-preload-978911 kubelet[660]: E0908 13:29:40.641165     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-vh962" podUID="959e88f4-10f0-4c5b-98da-0451d012b212"
	Sep 08 13:29:48 no-preload-978911 kubelet[660]: E0908 13:29:48.640583     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t" podUID="de89b47d-3928-4575-a5b7-9e358e5e0c42"
	Sep 08 13:29:51 no-preload-978911 kubelet[660]: I0908 13:29:51.638145     660 scope.go:117] "RemoveContainer" containerID="932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393"
	Sep 08 13:29:51 no-preload-978911 kubelet[660]: E0908 13:29:51.638336     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	Sep 08 13:29:52 no-preload-978911 kubelet[660]: E0908 13:29:52.641029     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-vh962" podUID="959e88f4-10f0-4c5b-98da-0451d012b212"
	Sep 08 13:29:59 no-preload-978911 kubelet[660]: E0908 13:29:59.638640     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t" podUID="de89b47d-3928-4575-a5b7-9e358e5e0c42"
	Sep 08 13:30:03 no-preload-978911 kubelet[660]: I0908 13:30:03.637741     660 scope.go:117] "RemoveContainer" containerID="932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393"
	Sep 08 13:30:03 no-preload-978911 kubelet[660]: E0908 13:30:03.637935     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	Sep 08 13:30:06 no-preload-978911 kubelet[660]: E0908 13:30:06.639781     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-vh962" podUID="959e88f4-10f0-4c5b-98da-0451d012b212"
	Sep 08 13:30:14 no-preload-978911 kubelet[660]: E0908 13:30:14.640351     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t" podUID="de89b47d-3928-4575-a5b7-9e358e5e0c42"
	Sep 08 13:30:15 no-preload-978911 kubelet[660]: I0908 13:30:15.637571     660 scope.go:117] "RemoveContainer" containerID="932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393"
	Sep 08 13:30:15 no-preload-978911 kubelet[660]: E0908 13:30:15.637773     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	Sep 08 13:30:21 no-preload-978911 kubelet[660]: E0908 13:30:21.639347     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-vh962" podUID="959e88f4-10f0-4c5b-98da-0451d012b212"
	Sep 08 13:30:29 no-preload-978911 kubelet[660]: I0908 13:30:29.639457     660 scope.go:117] "RemoveContainer" containerID="932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393"
	Sep 08 13:30:29 no-preload-978911 kubelet[660]: E0908 13:30:29.640478     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t" podUID="de89b47d-3928-4575-a5b7-9e358e5e0c42"
	Sep 08 13:30:29 no-preload-978911 kubelet[660]: E0908 13:30:29.640446     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
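The three errors that repeat above each have a single root cause: the dashboard image pull is rejected by Docker Hub's unauthenticated rate limit (HTTP 429), the metrics-server image points at the intentionally unresolvable fake.domain registry configured by the test, and dashboard-metrics-scraper is sitting in an ordinary CrashLoopBackOff back-off. For the 429 case, the remaining anonymous quota can be checked from the affected host using Docker's documented rate-limit preview repository; a minimal sketch, assuming curl and jq are installed:

    # Request an anonymous pull token for Docker's rate-limit preview repository.
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    # HEAD a manifest and read the ratelimit-limit / ratelimit-remaining response headers.
    curl -sI -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit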
	
	
	==> storage-provisioner [93d2704d5a3f99f28ddbee6e156a03effcbeea7e70c0bbe5b78ec6fd44e67b40] <==
	W0908 13:30:07.610930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:09.614576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:09.621545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:11.625089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:11.629632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:13.633163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:13.640861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:15.644382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:15.649284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:17.653253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:17.658083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:19.661174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:19.667982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:21.671234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:21.675702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:23.678838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:23.685530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:25.688824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:25.698009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:27.701869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:27.709035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:29.712300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:29.719198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:31.722236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:30:31.727748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
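The warnings above are repetitive but harmless here: this storage-provisioner build still lists and watches v1 Endpoints, which newer API servers flag as deprecated in favour of discovery.k8s.io/v1 EndpointSlice. An illustrative check that the replacement resource exists on this cluster:

    kubectl --context no-preload-978911 get endpointslices.discovery.k8s.io -A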
	
	
	==> storage-provisioner [e3598e5f859140c77eac8e1a536c27af9dacfad772d78e969b9b77d58a1b1b4b] <==
	I0908 13:20:50.099375       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 13:21:20.104169       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
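Per the timestamps, the storage-provisioner instance that exited (e3598e5f…) is the one started right after the restart at 13:20:50; it gave up after 30s because the in-cluster API service at 10.96.0.1:443 was not reachable yet, while the instance listed first (93d2704d…) was still running and logging at 13:30. An illustrative in-cluster probe of that same endpoint, assuming the curlimages/curl image is usable (note it is itself a Docker Hub image, so it may hit the same rate limit):

    kubectl --context no-preload-978911 run api-probe --rm -i --restart=Never \
      --image=curlimages/curl -- curl -s -k -m 5 https://10.96.0.1/version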
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-978911 -n no-preload-978911
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-978911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-vh962 kubernetes-dashboard-855c9754f9-96c5t
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-978911 describe pod metrics-server-746fcd58dc-vh962 kubernetes-dashboard-855c9754f9-96c5t
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-978911 describe pod metrics-server-746fcd58dc-vh962 kubernetes-dashboard-855c9754f9-96c5t: exit status 1 (80.070469ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-vh962" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-96c5t" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-978911 describe pod metrics-server-746fcd58dc-vh962 kubernetes-dashboard-855c9754f9-96c5t: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (543.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2x25h" [834daeba-b747-4a9b-92f8-8b6002a56239] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 13:28:47.956983 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-552634 -n old-k8s-version-552634
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-08 13:36:53.846479522 +0000 UTC m=+3844.182014603
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-552634 describe po kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context old-k8s-version-552634 describe po kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-2x25h
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-552634/192.168.76.2
Start Time:       Mon, 08 Sep 2025 13:18:26 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-94n7n (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-94n7n:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h to old-k8s-version-552634
Normal   Pulling    16m (x4 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     16m (x4 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     16m (x4 over 18m)     kubelet            Error: ErrImagePull
Warning  Failed     16m (x6 over 18m)     kubelet            Error: ImagePullBackOff
Normal   BackOff    3m15s (x64 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
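All of the events above are the same Docker Hub 429 already seen on the no-preload cluster; the kubelet never got as far as creating the container. When authenticated pulls or a registry mirror are not an option, one possible mitigation for a rerun is to pull the image somewhere with remaining quota and side-load it into the profile. This is only a sketch; it assumes the loaded content matches the digest pinned in the manifest and that the pod's pull policy will accept the cached copy:

    # On a host (or with credentials) that still has pull quota:
    docker pull docker.io/kubernetesui/dashboard:v2.7.0
    # Side-load the image into the old-k8s-version profile's container runtime.
    minikube -p old-k8s-version-552634 image load docker.io/kubernetesui/dashboard:v2.7.0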
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-552634 logs kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-552634 logs kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard: exit status 1 (111.718627ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-2x25h" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context old-k8s-version-552634 logs kubernetes-dashboard-8694d4445c-2x25h -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
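For reference, the 9m wait that just failed is roughly (not exactly) what the following kubectl invocation would do against the same label selector:

    kubectl --context old-k8s-version-552634 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m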
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-552634 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-552634
helpers_test.go:243: (dbg) docker inspect old-k8s-version-552634:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5",
	        "Created": "2025-09-08T13:16:31.223099842Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2954522,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:18:01.325654388Z",
	            "FinishedAt": "2025-09-08T13:18:00.352577855Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5/hostname",
	        "HostsPath": "/var/lib/docker/containers/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5/hosts",
	        "LogPath": "/var/lib/docker/containers/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5/35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5-json.log",
	        "Name": "/old-k8s-version-552634",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-552634:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-552634",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "35ad4f8124aaae1a4a21b979855600a48ae34cde05e8c0f08f6657ae7a4f6bc5",
	                "LowerDir": "/var/lib/docker/overlay2/423c34de6203abe7390e0ed2ae24e951b4c28c72668f1d0a19312091aedfbdf5-init/diff:/var/lib/docker/overlay2/665ff8bb3d07b8342629df037737f3667c7c59d9d1f85930dc3dfdf138460626/diff",
	                "MergedDir": "/var/lib/docker/overlay2/423c34de6203abe7390e0ed2ae24e951b4c28c72668f1d0a19312091aedfbdf5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/423c34de6203abe7390e0ed2ae24e951b4c28c72668f1d0a19312091aedfbdf5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/423c34de6203abe7390e0ed2ae24e951b4c28c72668f1d0a19312091aedfbdf5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-552634",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-552634/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-552634",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-552634",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-552634",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1054ee47c483a1428606e5542d6cd92f987e38d7fe61c05d8f0b2f04b8c0d12a",
	            "SandboxKey": "/var/run/docker/netns/1054ee47c483",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36723"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36724"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36727"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36725"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36726"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-552634": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:bc:34:57:9c:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31390a5008f77087656c6be660b8e10e100917da7426cbc36a65283e697b2fb6",
	                    "EndpointID": "cb35276bdbd42ef51d3288d1ebaf44c4e25b96a9f437d50b2f35153c806c1498",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-552634",
	                        "35ad4f8124aa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
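The helper captures the full docker inspect JSON above; when only a single field is needed, a Go-template query is shorter. For example, the forwarded API-server port and the container IP for this profile (which, per the dump, should print 36726 and 192.168.76.2):

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' old-k8s-version-552634
    docker inspect -f '{{ (index .NetworkSettings.Networks "old-k8s-version-552634").IPAddress }}' old-k8s-version-552634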
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-552634 -n old-k8s-version-552634
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-552634 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-552634 logs -n 25: (1.966694957s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ force-systemd-env-386836 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-386836 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ delete  │ -p force-systemd-env-386836                                                                                                                                                                                                                         │ force-systemd-env-386836 │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ start   │ -p cert-expiration-713579 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-713579   │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ start   │ -p pause-864887 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ pause   │ -p pause-864887 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ unpause │ -p pause-864887 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ pause   │ -p pause-864887 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ delete  │ -p pause-864887 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ delete  │ -p pause-864887                                                                                                                                                                                                                                     │ pause-864887             │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:15 UTC │
	│ start   │ -p cert-options-480035 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-480035      │ jenkins │ v1.36.0 │ 08 Sep 25 13:15 UTC │ 08 Sep 25 13:16 UTC │
	│ ssh     │ cert-options-480035 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-480035      │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
	│ ssh     │ -p cert-options-480035 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-480035      │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
	│ delete  │ -p cert-options-480035                                                                                                                                                                                                                              │ cert-options-480035      │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
	│ start   │ -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-552634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:17 UTC │ 08 Sep 25 13:17 UTC │
	│ stop    │ -p old-k8s-version-552634 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:17 UTC │ 08 Sep 25 13:18 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-552634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
	│ start   │ -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-552634   │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
	│ start   │ -p cert-expiration-713579 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-713579   │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
	│ delete  │ -p cert-expiration-713579                                                                                                                                                                                                                           │ cert-expiration-713579   │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:19 UTC │
	│ start   │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:19 UTC │ 08 Sep 25 13:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-978911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ stop    │ -p no-preload-978911 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ addons  │ enable dashboard -p no-preload-978911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ start   │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-978911        │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:21 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:20:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:20:35.931338 2962976 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:20:35.931455 2962976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:20:35.931470 2962976 out.go:374] Setting ErrFile to fd 2...
	I0908 13:20:35.931478 2962976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:20:35.931739 2962976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 13:20:35.932098 2962976 out.go:368] Setting JSON to false
	I0908 13:20:35.933046 2962976 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":68588,"bootTime":1757269048,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 13:20:35.933113 2962976 start.go:140] virtualization:  
	I0908 13:20:35.938091 2962976 out.go:179] * [no-preload-978911] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:20:35.941259 2962976 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:20:35.941303 2962976 notify.go:220] Checking for updates...
	I0908 13:20:35.946964 2962976 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:20:35.949861 2962976 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:20:35.952715 2962976 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 13:20:35.956376 2962976 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:20:35.959333 2962976 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:20:35.962746 2962976 config.go:182] Loaded profile config "no-preload-978911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:20:35.963352 2962976 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:20:35.991533 2962976 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:20:35.991638 2962976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:20:36.072292 2962976 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:20:36.052728775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:20:36.072406 2962976 docker.go:318] overlay module found
	I0908 13:20:36.075656 2962976 out.go:179] * Using the docker driver based on existing profile
	I0908 13:20:36.078527 2962976 start.go:304] selected driver: docker
	I0908 13:20:36.078546 2962976 start.go:918] validating driver "docker" against &{Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:20:36.078664 2962976 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:20:36.079452 2962976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:20:36.145451 2962976 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:20:36.13563539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:20:36.145819 2962976 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:20:36.145841 2962976 cni.go:84] Creating CNI manager for ""
	I0908 13:20:36.145901 2962976 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:20:36.145935 2962976 start.go:348] cluster config:
	{Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:20:36.150798 2962976 out.go:179] * Starting "no-preload-978911" primary control-plane node in "no-preload-978911" cluster
	I0908 13:20:36.153634 2962976 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 13:20:36.156700 2962976 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:20:36.159655 2962976 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:20:36.159871 2962976 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/config.json ...
	I0908 13:20:36.160216 2962976 cache.go:107] acquiring lock: {Name:mk9f7cd9bf685dbdd22a939bba5743203e9424b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160296 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0908 13:20:36.160304 2962976 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 94.209µs
	I0908 13:20:36.160319 2962976 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0908 13:20:36.159703 2962976 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:20:36.160406 2962976 cache.go:107] acquiring lock: {Name:mka44a87e995f06fac0280236e9044a05cbf0c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160447 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0908 13:20:36.160453 2962976 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0" took 50.452µs
	I0908 13:20:36.160460 2962976 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0908 13:20:36.160483 2962976 cache.go:107] acquiring lock: {Name:mkb0b6bdd176d599d5a383a38a60d5e44912d326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160512 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0908 13:20:36.160517 2962976 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0" took 35.248µs
	I0908 13:20:36.160522 2962976 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0908 13:20:36.160531 2962976 cache.go:107] acquiring lock: {Name:mk3a1846ff1d17320a61c4f0cd7f03a465580c64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160557 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0908 13:20:36.160562 2962976 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0" took 31.646µs
	I0908 13:20:36.160568 2962976 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0908 13:20:36.160576 2962976 cache.go:107] acquiring lock: {Name:mk1c90c15fea0bf3c7271fb14d259c914df38d83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160600 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0908 13:20:36.160605 2962976 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0" took 29.727µs
	I0908 13:20:36.160612 2962976 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0908 13:20:36.160622 2962976 cache.go:107] acquiring lock: {Name:mke7032f762990626a62b2503bb54454bb8e4428 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160650 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I0908 13:20:36.160655 2962976 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 33.541µs
	I0908 13:20:36.160660 2962976 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0908 13:20:36.160669 2962976 cache.go:107] acquiring lock: {Name:mk8b0387706fadd68f571a10efda673c0c270d63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160693 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0908 13:20:36.160698 2962976 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.12µs
	I0908 13:20:36.160709 2962976 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0908 13:20:36.160719 2962976 cache.go:107] acquiring lock: {Name:mk33ca43d20f07ddc371c694dc9c7a9ebcb088c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.160744 2962976 cache.go:115] /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0908 13:20:36.160749 2962976 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.663µs
	I0908 13:20:36.160754 2962976 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0908 13:20:36.160760 2962976 cache.go:87] Successfully saved all images to host disk.
	I0908 13:20:36.180376 2962976 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 13:20:36.180401 2962976 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 13:20:36.180415 2962976 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:20:36.180445 2962976 start.go:360] acquireMachinesLock for no-preload-978911: {Name:mk7699a0142cc873eeb1530cb26c114199650434 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:20:36.180501 2962976 start.go:364] duration metric: took 35.707µs to acquireMachinesLock for "no-preload-978911"
	I0908 13:20:36.180529 2962976 start.go:96] Skipping create...Using existing machine configuration
	I0908 13:20:36.180538 2962976 fix.go:54] fixHost starting: 
	I0908 13:20:36.180803 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:36.198068 2962976 fix.go:112] recreateIfNeeded on no-preload-978911: state=Stopped err=<nil>
	W0908 13:20:36.198097 2962976 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 13:20:36.201397 2962976 out.go:252] * Restarting existing docker container for "no-preload-978911" ...
	I0908 13:20:36.201532 2962976 cli_runner.go:164] Run: docker start no-preload-978911
	I0908 13:20:36.450658 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:36.474394 2962976 kic.go:430] container "no-preload-978911" state is running.
	I0908 13:20:36.474790 2962976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978911
	I0908 13:20:36.494941 2962976 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/config.json ...
	I0908 13:20:36.495172 2962976 machine.go:93] provisionDockerMachine start ...
	I0908 13:20:36.495236 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:36.514963 2962976 main.go:141] libmachine: Using SSH client type: native
	I0908 13:20:36.515301 2962976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36733 <nil> <nil>}
	I0908 13:20:36.515314 2962976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:20:36.515903 2962976 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55582->127.0.0.1:36733: read: connection reset by peer
	I0908 13:20:39.637915 2962976 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-978911
	
	I0908 13:20:39.637978 2962976 ubuntu.go:182] provisioning hostname "no-preload-978911"
	I0908 13:20:39.638058 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:39.656178 2962976 main.go:141] libmachine: Using SSH client type: native
	I0908 13:20:39.656497 2962976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36733 <nil> <nil>}
	I0908 13:20:39.656514 2962976 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-978911 && echo "no-preload-978911" | sudo tee /etc/hostname
	I0908 13:20:39.795726 2962976 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-978911
	
	I0908 13:20:39.795805 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:39.814502 2962976 main.go:141] libmachine: Using SSH client type: native
	I0908 13:20:39.814810 2962976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36733 <nil> <nil>}
	I0908 13:20:39.814835 2962976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-978911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-978911/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-978911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:20:39.950789 2962976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 13:20:39.950812 2962976 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-2749258/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-2749258/.minikube}
	I0908 13:20:39.950834 2962976 ubuntu.go:190] setting up certificates
	I0908 13:20:39.950843 2962976 provision.go:84] configureAuth start
	I0908 13:20:39.950907 2962976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978911
	I0908 13:20:39.972214 2962976 provision.go:143] copyHostCerts
	I0908 13:20:39.972295 2962976 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem, removing ...
	I0908 13:20:39.972317 2962976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem
	I0908 13:20:39.972393 2962976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem (1082 bytes)
	I0908 13:20:39.972496 2962976 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem, removing ...
	I0908 13:20:39.972501 2962976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem
	I0908 13:20:39.972526 2962976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem (1123 bytes)
	I0908 13:20:39.972586 2962976 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem, removing ...
	I0908 13:20:39.972591 2962976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem
	I0908 13:20:39.972613 2962976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem (1679 bytes)
	I0908 13:20:39.972667 2962976 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem org=jenkins.no-preload-978911 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-978911]
	I0908 13:20:40.245168 2962976 provision.go:177] copyRemoteCerts
	I0908 13:20:40.245243 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:20:40.245295 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.263254 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.355579 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 13:20:40.380935 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 13:20:40.406306 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 13:20:40.432096 2962976 provision.go:87] duration metric: took 481.231644ms to configureAuth
	I0908 13:20:40.432126 2962976 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:20:40.432326 2962976 config.go:182] Loaded profile config "no-preload-978911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:20:40.432340 2962976 machine.go:96] duration metric: took 3.937157056s to provisionDockerMachine
	I0908 13:20:40.432348 2962976 start.go:293] postStartSetup for "no-preload-978911" (driver="docker")
	I0908 13:20:40.432359 2962976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:20:40.432420 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:20:40.432470 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.449780 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.539192 2962976 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:20:40.543086 2962976 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:20:40.543119 2962976 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:20:40.543129 2962976 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:20:40.543142 2962976 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:20:40.543156 2962976 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/addons for local assets ...
	I0908 13:20:40.543213 2962976 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/files for local assets ...
	I0908 13:20:40.543299 2962976 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem -> 27511142.pem in /etc/ssl/certs
	I0908 13:20:40.543407 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 13:20:40.552362 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:20:40.577039 2962976 start.go:296] duration metric: took 144.675775ms for postStartSetup
	I0908 13:20:40.577118 2962976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:20:40.577178 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.593852 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.684143 2962976 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:20:40.688435 2962976 fix.go:56] duration metric: took 4.507889752s for fixHost
	I0908 13:20:40.688464 2962976 start.go:83] releasing machines lock for "no-preload-978911", held for 4.507944625s
	I0908 13:20:40.688533 2962976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978911
	I0908 13:20:40.705355 2962976 ssh_runner.go:195] Run: cat /version.json
	I0908 13:20:40.705419 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.705609 2962976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:20:40.705666 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:40.727686 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.735980 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:40.822021 2962976 ssh_runner.go:195] Run: systemctl --version
	I0908 13:20:40.992138 2962976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:20:40.996696 2962976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 13:20:41.017301 2962976 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
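	The patch above only rewrites loopback configs that already exist; as a hedged sketch, a file of the patched shape would look like the snippet below (the filename is illustrative, and only the "name", "type" and "cniVersion" fields are implied by the find/sed command):

	    # sketch: write a minimal loopback CNI config matching the patched form
	    cat <<'EOF' | sudo tee /etc/cni/net.d/loopback.conf >/dev/null
	    {
	        "cniVersion": "1.0.0",
	        "name": "loopback",
	        "type": "loopback"
	    }
	    EOF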
	I0908 13:20:41.017379 2962976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:20:41.026660 2962976 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 13:20:41.026732 2962976 start.go:495] detecting cgroup driver to use...
	I0908 13:20:41.026779 2962976 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:20:41.026849 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 13:20:41.041661 2962976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 13:20:41.053793 2962976 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:20:41.053929 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:20:41.068141 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:20:41.079992 2962976 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:20:41.158894 2962976 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:20:41.250698 2962976 docker.go:234] disabling docker service ...
	I0908 13:20:41.250775 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:20:41.265554 2962976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:20:41.277916 2962976 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:20:41.368244 2962976 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:20:41.462413 2962976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:20:41.475092 2962976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:20:41.493271 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 13:20:41.505845 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 13:20:41.517791 2962976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 13:20:41.517922 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 13:20:41.528370 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:20:41.541165 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 13:20:41.551232 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:20:41.562025 2962976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:20:41.572196 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 13:20:41.582707 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 13:20:41.593503 2962976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 13:20:41.604600 2962976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:20:41.614626 2962976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:20:41.623508 2962976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:20:41.714072 2962976 ssh_runner.go:195] Run: sudo systemctl restart containerd
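	Read together, the sed edits above pin containerd to the cgroupfs driver and the expected CRI defaults before the restart; roughly, the keys they rewrite end up as in the fragment below (table paths follow containerd 1.7's v2 config layout; this is an illustrative fragment, not the full /etc/containerd/config.toml):

	    # illustrative fragment of the values set by the sed commands above
	    cat <<'EOF' > /tmp/containerd-cri-fragment.toml
	    [plugins."io.containerd.grpc.v1.cri"]
	      sandbox_image = "registry.k8s.io/pause:3.10.1"
	      restrict_oom_score_adj = false
	      enable_unprivileged_ports = true
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        SystemdCgroup = false
	    EOF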
	I0908 13:20:41.897907 2962976 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0908 13:20:41.898011 2962976 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0908 13:20:41.902605 2962976 start.go:563] Will wait 60s for crictl version
	I0908 13:20:41.902693 2962976 ssh_runner.go:195] Run: which crictl
	I0908 13:20:41.907099 2962976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:20:41.945823 2962976 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0908 13:20:41.945960 2962976 ssh_runner.go:195] Run: containerd --version
	I0908 13:20:41.970995 2962976 ssh_runner.go:195] Run: containerd --version
	I0908 13:20:42.004794 2962976 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0908 13:20:42.019524 2962976 cli_runner.go:164] Run: docker network inspect no-preload-978911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:20:42.038523 2962976 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0908 13:20:42.042566 2962976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
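	The host alias is refreshed with a filter-then-append rewrite of /etc/hosts; the same idea as a small helper (the function name and temp path are made up for illustration):

	    ensure_hosts_entry() {   # sketch of the pattern used in the command above
	      local ip="$1" name="$2"
	      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	      sudo cp "/tmp/hosts.$$" /etc/hosts
	    }
	    ensure_hosts_entry 192.168.85.1 host.minikube.internal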
	I0908 13:20:42.054333 2962976 kubeadm.go:875] updating cluster {Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:20:42.054518 2962976 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:20:42.054568 2962976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:20:42.107352 2962976 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:20:42.107387 2962976 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:20:42.107396 2962976 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 containerd true true} ...
	I0908 13:20:42.107557 2962976 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-978911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
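	The unit fragment and flags above are written to the node a few steps later (the two scp calls below); to inspect them in place one could run something like this (a sketch only, not part of the test run):

	    systemctl cat kubelet                                      # assembled unit plus drop-ins
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  # the generated ExecStart override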
	I0908 13:20:42.107646 2962976 ssh_runner.go:195] Run: sudo crictl info
	I0908 13:20:42.191420 2962976 cni.go:84] Creating CNI manager for ""
	I0908 13:20:42.191459 2962976 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:20:42.191472 2962976 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:20:42.191522 2962976 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-978911 NodeName:no-preload-978911 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:20:42.191696 2962976 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-978911"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 13:20:42.191812 2962976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:20:42.205135 2962976 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:20:42.205221 2962976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:20:42.217434 2962976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0908 13:20:42.248179 2962976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:20:42.275286 2962976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
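	On a clean node, feeding the rendered file to kubeadm by hand would look roughly like the call below (a sketch: the kubeadm path is assumed by analogy with the kubelet/kubectl paths in this log, and the exact flags minikube passes are not shown in this excerpt, which takes the cluster-restart path instead):

	    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new \
	      --ignore-preflight-errors=all   # flag choice is illustrative only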
	I0908 13:20:42.308117 2962976 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:20:42.312692 2962976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:20:42.326442 2962976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:20:42.421795 2962976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:20:42.436584 2962976 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911 for IP: 192.168.85.2
	I0908 13:20:42.436607 2962976 certs.go:194] generating shared ca certs ...
	I0908 13:20:42.436625 2962976 certs.go:226] acquiring lock for ca certs: {Name:mka64c3c41f67c038c6cf0d4d20f2375b7abe78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:20:42.436807 2962976 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key
	I0908 13:20:42.436928 2962976 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key
	I0908 13:20:42.436943 2962976 certs.go:256] generating profile certs ...
	I0908 13:20:42.437066 2962976 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.key
	I0908 13:20:42.437162 2962976 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/apiserver.key.7fd60a6c
	I0908 13:20:42.437238 2962976 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/proxy-client.key
	I0908 13:20:42.437393 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem (1338 bytes)
	W0908 13:20:42.437445 2962976 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114_empty.pem, impossibly tiny 0 bytes
	I0908 13:20:42.437460 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 13:20:42.437491 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem (1082 bytes)
	I0908 13:20:42.437542 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:20:42.437581 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem (1679 bytes)
	I0908 13:20:42.437641 2962976 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:20:42.438302 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:20:42.466909 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 13:20:42.494218 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:20:42.521632 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 13:20:42.551803 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 13:20:42.582057 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 13:20:42.614465 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:20:42.652857 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 13:20:42.682624 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /usr/share/ca-certificates/27511142.pem (1708 bytes)
	I0908 13:20:42.708216 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:20:42.734706 2962976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem --> /usr/share/ca-certificates/2751114.pem (1338 bytes)
	I0908 13:20:42.761119 2962976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:20:42.781025 2962976 ssh_runner.go:195] Run: openssl version
	I0908 13:20:42.787728 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:20:42.797410 2962976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:20:42.801102 2962976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:20:42.801201 2962976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:20:42.808137 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 13:20:42.817759 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2751114.pem && ln -fs /usr/share/ca-certificates/2751114.pem /etc/ssl/certs/2751114.pem"
	I0908 13:20:42.827168 2962976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2751114.pem
	I0908 13:20:42.831084 2962976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 12:41 /usr/share/ca-certificates/2751114.pem
	I0908 13:20:42.831147 2962976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2751114.pem
	I0908 13:20:42.838272 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2751114.pem /etc/ssl/certs/51391683.0"
	I0908 13:20:42.847707 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27511142.pem && ln -fs /usr/share/ca-certificates/27511142.pem /etc/ssl/certs/27511142.pem"
	I0908 13:20:42.859460 2962976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27511142.pem
	I0908 13:20:42.863196 2962976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 12:41 /usr/share/ca-certificates/27511142.pem
	I0908 13:20:42.863282 2962976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27511142.pem
	I0908 13:20:42.870770 2962976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27511142.pem /etc/ssl/certs/3ec20f2e.0"
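	The b5213941.0, 51391683.0 and 3ec20f2e.0 link names above are OpenSSL subject-hash names; a sketch of how such a link is derived for the minikube CA:

	    h=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)   # prints b5213941 for this CA, per the log
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"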
	I0908 13:20:42.881017 2962976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:20:42.884978 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 13:20:42.894328 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 13:20:42.901729 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 13:20:42.909115 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 13:20:42.916106 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 13:20:42.923024 2962976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
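	Each of the checks above relies on openssl's -checkend, which exits 0 only if the certificate remains valid for at least the given number of seconds, so -checkend 86400 asserts "no expiry within 24 hours":

	    # sketch: fail loudly if the apiserver-kubelet client cert expires within a day
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      || echo "certificate expires within 24h (or is already expired)"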
	I0908 13:20:42.930253 2962976 kubeadm.go:392] StartCluster: {Name:no-preload-978911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-978911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:20:42.930382 2962976 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0908 13:20:42.930445 2962976 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:20:42.967887 2962976 cri.go:89] found id: "bdd85fd62e90072711cf66c0db968c1136a28f624fc072df147df0fc494584c8"
	I0908 13:20:42.967912 2962976 cri.go:89] found id: "e685e05b08b51e456163cacd1644bf5fab5dd1c6118ed288241b851f6da29e62"
	I0908 13:20:42.967917 2962976 cri.go:89] found id: "aa297480aa1b27d3b15093502059992dfea640300a65451e6f3db7b6b056ed1a"
	I0908 13:20:42.967923 2962976 cri.go:89] found id: "d5252e4ac54a43b7539b2bfe24a8a0183a6b9420e5f2255895a872dd266dfbdd"
	I0908 13:20:42.967927 2962976 cri.go:89] found id: "f5e8fe9a2b29ca8f991932c0c60513abc177286d77ac00c6ac9f77de28096302"
	I0908 13:20:42.967933 2962976 cri.go:89] found id: "e59a4771913f0c586033aa2f970d5003227c9262bc5c73b7ef6007c8ab2801a0"
	I0908 13:20:42.967937 2962976 cri.go:89] found id: "453e5e825289a6e70e8cee4d4d3e9be4fa57968b9f3101e0486c55f00773e336"
	I0908 13:20:42.967962 2962976 cri.go:89] found id: "89cf83ed06352d9266afeb8d98085daf1e7cc6dfe5636d2a24ff0d4804025f62"
	I0908 13:20:42.967970 2962976 cri.go:89] found id: ""
	I0908 13:20:42.968033 2962976 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0908 13:20:42.983189 2962976 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-08T13:20:42Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0908 13:20:42.983277 2962976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 13:20:42.992272 2962976 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 13:20:42.992292 2962976 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 13:20:42.992372 2962976 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 13:20:43.001397 2962976 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 13:20:43.002879 2962976 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-978911" does not appear in /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:20:43.003451 2962976 kubeconfig.go:62] /home/jenkins/minikube-integration/21508-2749258/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-978911" cluster setting kubeconfig missing "no-preload-978911" context setting]
	I0908 13:20:43.004375 2962976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:20:43.009201 2962976 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 13:20:43.018628 2962976 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0908 13:20:43.018703 2962976 kubeadm.go:593] duration metric: took 26.405085ms to restartPrimaryControlPlane
	I0908 13:20:43.018720 2962976 kubeadm.go:394] duration metric: took 88.476669ms to StartCluster
	I0908 13:20:43.018749 2962976 settings.go:142] acquiring lock: {Name:mk4a46c455122873706b4d72c01ce6416a89153c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:20:43.018813 2962976 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:20:43.019719 2962976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:20:43.019921 2962976 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 13:20:43.020275 2962976 config.go:182] Loaded profile config "no-preload-978911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:20:43.020343 2962976 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 13:20:43.020435 2962976 addons.go:69] Setting storage-provisioner=true in profile "no-preload-978911"
	I0908 13:20:43.020455 2962976 addons.go:238] Setting addon storage-provisioner=true in "no-preload-978911"
	W0908 13:20:43.020466 2962976 addons.go:247] addon storage-provisioner should already be in state true
	I0908 13:20:43.020459 2962976 addons.go:69] Setting dashboard=true in profile "no-preload-978911"
	I0908 13:20:43.020534 2962976 addons.go:238] Setting addon dashboard=true in "no-preload-978911"
	W0908 13:20:43.020568 2962976 addons.go:247] addon dashboard should already be in state true
	I0908 13:20:43.020608 2962976 host.go:66] Checking if "no-preload-978911" exists ...
	I0908 13:20:43.020489 2962976 host.go:66] Checking if "no-preload-978911" exists ...
	I0908 13:20:43.021413 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.021413 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.020494 2962976 addons.go:69] Setting metrics-server=true in profile "no-preload-978911"
	I0908 13:20:43.022090 2962976 addons.go:238] Setting addon metrics-server=true in "no-preload-978911"
	W0908 13:20:43.022105 2962976 addons.go:247] addon metrics-server should already be in state true
	I0908 13:20:43.022137 2962976 host.go:66] Checking if "no-preload-978911" exists ...
	I0908 13:20:43.022666 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.020436 2962976 addons.go:69] Setting default-storageclass=true in profile "no-preload-978911"
	I0908 13:20:43.024278 2962976 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-978911"
	I0908 13:20:43.025361 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.026372 2962976 out.go:179] * Verifying Kubernetes components...
	I0908 13:20:43.030861 2962976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:20:43.087484 2962976 addons.go:238] Setting addon default-storageclass=true in "no-preload-978911"
	W0908 13:20:43.087508 2962976 addons.go:247] addon default-storageclass should already be in state true
	I0908 13:20:43.087533 2962976 host.go:66] Checking if "no-preload-978911" exists ...
	I0908 13:20:43.087950 2962976 cli_runner.go:164] Run: docker container inspect no-preload-978911 --format={{.State.Status}}
	I0908 13:20:43.102706 2962976 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 13:20:43.102748 2962976 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 13:20:43.102758 2962976 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 13:20:43.105724 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 13:20:43.105749 2962976 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 13:20:43.105773 2962976 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:20:43.105788 2962976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 13:20:43.105823 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:43.105850 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:43.115360 2962976 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 13:20:43.118627 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 13:20:43.118650 2962976 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 13:20:43.118715 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:43.136790 2962976 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 13:20:43.136812 2962976 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 13:20:43.136876 2962976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978911
	I0908 13:20:43.175291 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:43.191796 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:43.193563 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:43.208474 2962976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36733 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/no-preload-978911/id_rsa Username:docker}
	I0908 13:20:43.247091 2962976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:20:43.292511 2962976 node_ready.go:35] waiting up to 6m0s for node "no-preload-978911" to be "Ready" ...
	I0908 13:20:43.382311 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:20:43.421803 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 13:20:43.421878 2962976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 13:20:43.458209 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 13:20:43.458297 2962976 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 13:20:43.473379 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:20:43.552131 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 13:20:43.552207 2962976 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 13:20:43.560638 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 13:20:43.560711 2962976 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 13:20:43.624453 2962976 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:20:43.624479 2962976 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 13:20:43.699654 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 13:20:43.699676 2962976 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 13:20:43.771524 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0908 13:20:43.785451 2962976 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:20:43.785495 2962976 retry.go:31] will retry after 247.912555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 13:20:43.785541 2962976 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:20:43.785547 2962976 retry.go:31] will retry after 303.088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
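
The two apply failures above happen because the apiserver on localhost:8443 is not yet accepting connections when the first kubectl apply runs; the retry.go lines show minikube waiting a short, growing delay before trying again, and the later `apply --force` runs at 13:20:44 do succeed once the apiserver is up (they complete after roughly 7s further down). A minimal Go sketch of that retry-after-delay pattern, under stated assumptions: the helper name, retry count, and delays are placeholders, and this is not minikube's actual retry package.

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    // applyManifest shells out to `kubectl apply -f <path>`; the kubeconfig
    // path below mirrors the log but is otherwise just a placeholder.
    func applyManifest(path string) error {
        cmd := exec.Command("kubectl", "apply", "-f", path)
        cmd.Env = append(cmd.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Printf("apply %s failed: %v\n%s", path, err, out)
        }
        return err
    }

    func main() {
        // Retry with a growing delay, roughly the behaviour the retry.go
        // lines above describe while the apiserver is still coming up.
        delay := 250 * time.Millisecond
        for attempt := 1; attempt <= 5; attempt++ {
            if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err == nil {
                return
            }
            time.Sleep(delay)
            delay *= 2
        }
        log.Fatal("giving up after 5 attempts")
    }
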
	I0908 13:20:43.834968 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 13:20:43.835041 2962976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 13:20:43.961116 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 13:20:43.961144 2962976 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 13:20:44.033768 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:20:44.089120 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:20:44.174947 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 13:20:44.174976 2962976 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 13:20:44.350161 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 13:20:44.350203 2962976 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 13:20:44.457915 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 13:20:44.457956 2962976 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 13:20:44.496024 2962976 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:20:44.496054 2962976 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 13:20:44.520347 2962976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:20:48.489246 2962976 node_ready.go:49] node "no-preload-978911" is "Ready"
	I0908 13:20:48.489276 2962976 node_ready.go:38] duration metric: took 5.196680279s for node "no-preload-978911" to be "Ready" ...
	I0908 13:20:48.489290 2962976 api_server.go:52] waiting for apiserver process to appear ...
	I0908 13:20:48.489355 2962976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:20:51.238545 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.466984527s)
	I0908 13:20:51.238585 2962976 addons.go:479] Verifying addon metrics-server=true in "no-preload-978911"
	I0908 13:20:51.400235 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.366422544s)
	I0908 13:20:51.400286 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.311136503s)
	I0908 13:20:51.400525 2962976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.880137919s)
	I0908 13:20:51.400723 2962976 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.911337726s)
	I0908 13:20:51.400744 2962976 api_server.go:72] duration metric: took 8.380794855s to wait for apiserver process to appear ...
	I0908 13:20:51.400750 2962976 api_server.go:88] waiting for apiserver healthz status ...
	I0908 13:20:51.400766 2962976 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0908 13:20:51.403695 2962976 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-978911 addons enable metrics-server
	
	I0908 13:20:51.409743 2962976 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
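
The healthz wait above polls https://192.168.85.2:8443/healthz until it returns 200 with the body "ok". A minimal sketch of such a probe, assuming certificate verification is skipped purely for brevity; minikube itself authenticates with the cluster's client certificates, so a real check should not use InsecureSkipVerify.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is only for this sketch; see the note above.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.85.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
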
	I0908 13:20:51.411883 2962976 api_server.go:141] control plane version: v1.34.0
	I0908 13:20:51.411916 2962976 api_server.go:131] duration metric: took 11.159718ms to wait for apiserver health ...
	I0908 13:20:51.411925 2962976 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 13:20:51.414126 2962976 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0908 13:20:51.415529 2962976 system_pods.go:59] 9 kube-system pods found
	I0908 13:20:51.415567 2962976 system_pods.go:61] "coredns-66bc5c9577-7www8" [cb6a614e-8f35-46f4-957d-04268f222190] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:20:51.415576 2962976 system_pods.go:61] "etcd-no-preload-978911" [4e38fee5-f757-4ee8-a97f-c76e4b633559] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:20:51.415587 2962976 system_pods.go:61] "kindnet-8cc7v" [3da2f7cd-76d4-456a-8cc8-069d4c2405a6] Running
	I0908 13:20:51.415596 2962976 system_pods.go:61] "kube-apiserver-no-preload-978911" [44a03487-7993-4879-9ab7-88227004b4f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:20:51.415617 2962976 system_pods.go:61] "kube-controller-manager-no-preload-978911" [7a8863ca-4835-46f7-9529-dd33b2a669f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:20:51.415626 2962976 system_pods.go:61] "kube-proxy-zb84d" [05431e58-3897-4783-899f-e079efa82e52] Running
	I0908 13:20:51.415636 2962976 system_pods.go:61] "kube-scheduler-no-preload-978911" [3d53a214-024a-4b7c-9500-23b47958a0c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:20:51.415643 2962976 system_pods.go:61] "metrics-server-746fcd58dc-vh962" [959e88f4-10f0-4c5b-98da-0451d012b212] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:20:51.415653 2962976 system_pods.go:61] "storage-provisioner" [062103e0-2e60-4495-84fe-e00955426335] Running
	I0908 13:20:51.415659 2962976 system_pods.go:74] duration metric: took 3.729531ms to wait for pod list to return data ...
	I0908 13:20:51.415666 2962976 default_sa.go:34] waiting for default service account to be created ...
	I0908 13:20:51.417731 2962976 addons.go:514] duration metric: took 8.397368128s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0908 13:20:51.418437 2962976 default_sa.go:45] found service account: "default"
	I0908 13:20:51.418459 2962976 default_sa.go:55] duration metric: took 2.786524ms for default service account to be created ...
	I0908 13:20:51.418468 2962976 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 13:20:51.421224 2962976 system_pods.go:86] 9 kube-system pods found
	I0908 13:20:51.421265 2962976 system_pods.go:89] "coredns-66bc5c9577-7www8" [cb6a614e-8f35-46f4-957d-04268f222190] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:20:51.421274 2962976 system_pods.go:89] "etcd-no-preload-978911" [4e38fee5-f757-4ee8-a97f-c76e4b633559] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:20:51.421281 2962976 system_pods.go:89] "kindnet-8cc7v" [3da2f7cd-76d4-456a-8cc8-069d4c2405a6] Running
	I0908 13:20:51.421293 2962976 system_pods.go:89] "kube-apiserver-no-preload-978911" [44a03487-7993-4879-9ab7-88227004b4f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:20:51.421309 2962976 system_pods.go:89] "kube-controller-manager-no-preload-978911" [7a8863ca-4835-46f7-9529-dd33b2a669f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:20:51.421314 2962976 system_pods.go:89] "kube-proxy-zb84d" [05431e58-3897-4783-899f-e079efa82e52] Running
	I0908 13:20:51.421321 2962976 system_pods.go:89] "kube-scheduler-no-preload-978911" [3d53a214-024a-4b7c-9500-23b47958a0c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:20:51.421331 2962976 system_pods.go:89] "metrics-server-746fcd58dc-vh962" [959e88f4-10f0-4c5b-98da-0451d012b212] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:20:51.421336 2962976 system_pods.go:89] "storage-provisioner" [062103e0-2e60-4495-84fe-e00955426335] Running
	I0908 13:20:51.421344 2962976 system_pods.go:126] duration metric: took 2.87014ms to wait for k8s-apps to be running ...
	I0908 13:20:51.421353 2962976 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 13:20:51.421410 2962976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:20:51.436310 2962976 system_svc.go:56] duration metric: took 14.944353ms WaitForService to wait for kubelet
	I0908 13:20:51.436337 2962976 kubeadm.go:578] duration metric: took 8.416385263s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:20:51.436356 2962976 node_conditions.go:102] verifying NodePressure condition ...
	I0908 13:20:51.439945 2962976 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 13:20:51.440017 2962976 node_conditions.go:123] node cpu capacity is 2
	I0908 13:20:51.440045 2962976 node_conditions.go:105] duration metric: took 3.683559ms to run NodePressure ...
	I0908 13:20:51.440074 2962976 start.go:241] waiting for startup goroutines ...
	I0908 13:20:51.440108 2962976 start.go:246] waiting for cluster config update ...
	I0908 13:20:51.440137 2962976 start.go:255] writing updated cluster config ...
	I0908 13:20:51.440471 2962976 ssh_runner.go:195] Run: rm -f paused
	I0908 13:20:51.443803 2962976 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:20:51.448363 2962976 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7www8" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 13:20:53.454728 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:20:55.456275 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:20:57.954301 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:20:59.954409 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:02.454336 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:04.954288 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:06.954819 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:09.453453 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:11.453986 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:13.454640 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:15.454718 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:17.953833 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:19.953875 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:21.954243 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:23.954403 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	W0908 13:21:26.453994 2962976 pod_ready.go:104] pod "coredns-66bc5c9577-7www8" is not "Ready", error: <nil>
	I0908 13:21:27.454550 2962976 pod_ready.go:94] pod "coredns-66bc5c9577-7www8" is "Ready"
	I0908 13:21:27.454580 2962976 pod_ready.go:86] duration metric: took 36.006192784s for pod "coredns-66bc5c9577-7www8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.457609 2962976 pod_ready.go:83] waiting for pod "etcd-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.462474 2962976 pod_ready.go:94] pod "etcd-no-preload-978911" is "Ready"
	I0908 13:21:27.462506 2962976 pod_ready.go:86] duration metric: took 4.86819ms for pod "etcd-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.464750 2962976 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.474261 2962976 pod_ready.go:94] pod "kube-apiserver-no-preload-978911" is "Ready"
	I0908 13:21:27.474285 2962976 pod_ready.go:86] duration metric: took 9.508793ms for pod "kube-apiserver-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.477518 2962976 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.657753 2962976 pod_ready.go:94] pod "kube-controller-manager-no-preload-978911" is "Ready"
	I0908 13:21:27.657786 2962976 pod_ready.go:86] duration metric: took 180.242988ms for pod "kube-controller-manager-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:27.853931 2962976 pod_ready.go:83] waiting for pod "kube-proxy-zb84d" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:28.252014 2962976 pod_ready.go:94] pod "kube-proxy-zb84d" is "Ready"
	I0908 13:21:28.252038 2962976 pod_ready.go:86] duration metric: took 398.080343ms for pod "kube-proxy-zb84d" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:28.451995 2962976 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:28.852776 2962976 pod_ready.go:94] pod "kube-scheduler-no-preload-978911" is "Ready"
	I0908 13:21:28.852805 2962976 pod_ready.go:86] duration metric: took 400.781462ms for pod "kube-scheduler-no-preload-978911" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:21:28.852820 2962976 pod_ready.go:40] duration metric: took 37.408986235s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:21:28.914752 2962976 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 13:21:28.918052 2962976 out.go:179] * Done! kubectl is now configured to use "no-preload-978911" cluster and "default" namespace by default
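The pod_ready.go waits above select pods by the listed component/k8s-app labels and report "Ready" once the pod's PodReady condition turns True (the coredns pod took about 36s here). A small client-go sketch of that readiness predicate, assuming a placeholder kubeconfig path; it is an illustration of the check, not minikube's own wait code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
            if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
                fmt.Println("coredns is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }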
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	c6c71f6a40e94       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   8                   35541ea28d6e4       dashboard-metrics-scraper-5f989dc9cf-fhtcn
	0b74b004307ae       ba04bb24b9575       17 minutes ago       Running             storage-provisioner         2                   bbe423ed58299       storage-provisioner
	6b1f4f786b29f       1611cd07b61d5       18 minutes ago       Running             busybox                     1                   5d758b6ec40e7       busybox
	b5ae572e6c321       97e04611ad434       18 minutes ago       Running             coredns                     1                   488eef1731825       coredns-5dd5756b68-d78mw
	873d0865e41ef       b1a8c6f707935       18 minutes ago       Running             kindnet-cni                 1                   8475d2f244eb3       kindnet-hc6xz
	105ff83200e38       940f54a5bcae9       18 minutes ago       Running             kube-proxy                  1                   a2b9277fc3436       kube-proxy-5lcjb
	66645eab9b879       ba04bb24b9575       18 minutes ago       Exited              storage-provisioner         1                   bbe423ed58299       storage-provisioner
	a472dcf368c18       9cdd6470f48c8       18 minutes ago       Running             etcd                        1                   dd9040d73d36b       etcd-old-k8s-version-552634
	306f060aeefe6       46cc66ccc7c19       18 minutes ago       Running             kube-controller-manager     1                   fb5b01c16af98       kube-controller-manager-old-k8s-version-552634
	f99ef8a528998       762dce4090c5f       18 minutes ago       Running             kube-scheduler              1                   46f9a80d026b2       kube-scheduler-old-k8s-version-552634
	ad5401098ad61       00543d2fe5d71       18 minutes ago       Running             kube-apiserver              1                   5aa721eca188e       kube-apiserver-old-k8s-version-552634
	8b4252d29a3c9       1611cd07b61d5       19 minutes ago       Exited              busybox                     0                   f9cb69407935e       busybox
	887f29bb1a772       97e04611ad434       19 minutes ago       Exited              coredns                     0                   a553ca9be588b       coredns-5dd5756b68-d78mw
	9895c6c404f91       b1a8c6f707935       19 minutes ago       Exited              kindnet-cni                 0                   adf66c46e22c7       kindnet-hc6xz
	ebc5022b0aeaa       940f54a5bcae9       19 minutes ago       Exited              kube-proxy                  0                   37c9c0bd19c4a       kube-proxy-5lcjb
	12a8c02c281d2       00543d2fe5d71       20 minutes ago       Exited              kube-apiserver              0                   0747d2824c491       kube-apiserver-old-k8s-version-552634
	56c17c12d8122       762dce4090c5f       20 minutes ago       Exited              kube-scheduler              0                   3ca96ef5cfac0       kube-scheduler-old-k8s-version-552634
	86ee799990106       9cdd6470f48c8       20 minutes ago       Exited              etcd                        0                   59bfcc3d2aaac       etcd-old-k8s-version-552634
	cbd09fa5b3a5f       46cc66ccc7c19       20 minutes ago       Exited              kube-controller-manager     0                   b6c9cd52874d0       kube-controller-manager-old-k8s-version-552634
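The table above shows dashboard-metrics-scraper on restart attempt 8, with the latest container having exited about a minute before this dump. The kubelet spaces such restarts with an exponential backoff that, by default, starts at 10s, doubles per failed restart, and is capped at 5 minutes, which is why later attempts are further apart. A tiny sketch of that progression, assuming those default backoff parameters (they are kubelet implementation details, not read from this report).

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet defaults: 10s initial backoff, doubling, 5m cap.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: next restart after %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }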
	
	
	==> containerd <==
	Sep 08 13:29:44 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:29:44.720193227Z" level=info msg="received exit event container_id:\"7c6b3898c34b308387005b87b3525c36ebfbad3a5a92d0e69e69f6813fc070fa\" id:\"7c6b3898c34b308387005b87b3525c36ebfbad3a5a92d0e69e69f6813fc070fa\" pid:3212 exit_status:255 exited_at:{seconds:1757338184 nanos:719902044}"
	Sep 08 13:29:44 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:29:44.743288364Z" level=info msg="shim disconnected" id=7c6b3898c34b308387005b87b3525c36ebfbad3a5a92d0e69e69f6813fc070fa namespace=k8s.io
	Sep 08 13:29:44 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:29:44.743328125Z" level=warning msg="cleaning up after shim disconnected" id=7c6b3898c34b308387005b87b3525c36ebfbad3a5a92d0e69e69f6813fc070fa namespace=k8s.io
	Sep 08 13:29:44 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:29:44.743363439Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:29:44 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:29:44.858810341Z" level=info msg="RemoveContainer for \"9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e\""
	Sep 08 13:29:44 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:29:44.865737388Z" level=info msg="RemoveContainer for \"9f55ee77b42a140b697d9b9dcde6008f0264de7d344fbd7d53c296baf783771e\" returns successfully"
	Sep 08 13:34:31 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:31.637263461Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 08 13:34:31 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:31.657112040Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 08 13:34:31 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:31.659246751Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 08 13:34:31 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:31.659261315Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 08 13:34:40 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:40.640199206Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 13:34:40 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:40.642524566Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:34:40 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:40.781537747Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:34:41 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:41.061240453Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 13:34:41 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:41.061279672Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 08 13:34:56 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:56.641605049Z" level=info msg="CreateContainer within sandbox \"35541ea28d6e4cdb4992bca1189dbb99418187d926c8d278cf72f8d44e4f8809\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Sep 08 13:34:56 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:56.659807742Z" level=info msg="CreateContainer within sandbox \"35541ea28d6e4cdb4992bca1189dbb99418187d926c8d278cf72f8d44e4f8809\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0\""
	Sep 08 13:34:56 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:56.660700123Z" level=info msg="StartContainer for \"c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0\""
	Sep 08 13:34:56 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:56.716269578Z" level=info msg="StartContainer for \"c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0\" returns successfully"
	Sep 08 13:34:56 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:56.716430017Z" level=info msg="received exit event container_id:\"c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0\" id:\"c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0\" pid:3322 exit_status:255 exited_at:{seconds:1757338496 nanos:714806152}"
	Sep 08 13:34:56 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:56.738719793Z" level=info msg="shim disconnected" id=c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0 namespace=k8s.io
	Sep 08 13:34:56 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:56.738764330Z" level=warning msg="cleaning up after shim disconnected" id=c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0 namespace=k8s.io
	Sep 08 13:34:56 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:56.738800735Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:34:57 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:57.618201950Z" level=info msg="RemoveContainer for \"7c6b3898c34b308387005b87b3525c36ebfbad3a5a92d0e69e69f6813fc070fa\""
	Sep 08 13:34:57 old-k8s-version-552634 containerd[574]: time="2025-09-08T13:34:57.625326432Z" level=info msg="RemoveContainer for \"7c6b3898c34b308387005b87b3525c36ebfbad3a5a92d0e69e69f6813fc070fa\" returns successfully"
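The two pull errors in this containerd log account for the images that never arrive: `fake.domain/registry.k8s.io/echoserver:1.4` can never resolve (DNS lookup of fake.domain fails), and the dashboard image is pinned by digest, so containerd asks Docker Hub for that exact manifest and is refused with 429 (unauthenticated pull rate limit). A short sketch of why the error URL targets the sha256 manifest rather than the v2.7.0 tag: when a reference carries both a tag and a digest, the digest is what gets resolved. The splitting helper below is illustrative only, not containerd's reference parser.

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef separates an image reference of the form name[:tag][@digest].
    func splitRef(ref string) (name, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        // A colon after the last "/" is a tag separator, not a registry port.
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        name, tag, digest := splitRef(
            "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93")
        fmt.Println("name:  ", name)
        fmt.Println("tag:   ", tag)    // ignored once a digest is present
        fmt.Println("digest:", digest) // what containerd actually resolves
    }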
	
	
	==> coredns [887f29bb1a772ea77ca331bb52f5acf91f88e07e5ede3c3a3a74a6959bc2d4e5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52648 - 59601 "HINFO IN 4276947130458500050.8211269930752862866. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027626934s
	
	
	==> coredns [b5ae572e6c3217fbcd8a8a6bb2451f35b959e4517116e7b1d056ad2e30ede111] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40339 - 53587 "HINFO IN 3405231602673676994.1114408218903977437. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.065848228s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-552634
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-552634
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=old-k8s-version-552634
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_16_55_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:16:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-552634
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 13:36:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 13:34:05 +0000   Mon, 08 Sep 2025 13:16:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 13:34:05 +0000   Mon, 08 Sep 2025 13:16:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 13:34:05 +0000   Mon, 08 Sep 2025 13:16:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 13:34:05 +0000   Mon, 08 Sep 2025 13:17:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-552634
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 18ee3a277fa24969a47806fabfc259c0
	  System UUID:                637c7b27-ceff-4552-8bc2-a5a52de7b8d9
	  Boot ID:                    9f5228b8-b58e-4b72-938a-84f5f7e9d841
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-d78mw                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 etcd-old-k8s-version-552634                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         20m
	  kube-system                 kindnet-hc6xz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-old-k8s-version-552634             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-old-k8s-version-552634    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-5lcjb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-old-k8s-version-552634             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-57f55c9bc5-ppxnd                   100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         19m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-fhtcn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-2x25h             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node old-k8s-version-552634 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node old-k8s-version-552634 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node old-k8s-version-552634 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             20m                kubelet          Node old-k8s-version-552634 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                19m                kubelet          Node old-k8s-version-552634 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node old-k8s-version-552634 event: Registered Node old-k8s-version-552634 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node old-k8s-version-552634 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node old-k8s-version-552634 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node old-k8s-version-552634 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-552634 event: Registered Node old-k8s-version-552634 in Controller
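The percentages in the Allocated resources table above are the requests and limits divided by the node's allocatable capacity (2 CPUs, 8022300Ki memory), truncated the way kubectl describe appears to print them (47.5% shows as 47%). A quick check of that arithmetic:

    package main

    import "fmt"

    // pct truncates, matching the percentages printed in the table above.
    func pct(part, total int64) int64 { return part * 100 / total }

    func main() {
        allocCPU := int64(2000)    // 2 CPUs, in millicores
        allocMem := int64(8022300) // allocatable memory, in Ki
        fmt.Println("cpu requests:", pct(950, allocCPU), "%")      // -> 47
        fmt.Println("cpu limits:  ", pct(100, allocCPU), "%")      // -> 5
        fmt.Println("mem requests:", pct(420*1024, allocMem), "%") // 420Mi -> 5
        fmt.Println("mem limits:  ", pct(220*1024, allocMem), "%") // 220Mi -> 2
    }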
	
	
	==> dmesg <==
	[Sep 8 12:32] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [86ee7999901068cbb878838a61a92c5a7f51e9f4bcca6f825a9580a81d698726] <==
	{"level":"info","ts":"2025-09-08T13:16:46.858909Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-08T13:16:46.859245Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-08T13:16:46.859325Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-08T13:16:46.860514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-09-08T13:16:46.861226Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-08T13:16:46.861291Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-08T13:16:46.862576Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-09-08T13:16:47.416142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-09-08T13:16:47.416189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-09-08T13:16:47.416218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-09-08T13:16:47.416369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-09-08T13:16:47.416482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-09-08T13:16:47.416574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-09-08T13:16:47.416667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-09-08T13:16:47.419515Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T13:16:47.420115Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-552634 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-08T13:16:47.420267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T13:16:47.421471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-09-08T13:16:47.421887Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T13:16:47.421996Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T13:16:47.422057Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T13:16:47.422206Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T13:16:47.43072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-08T13:16:47.432069Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-08T13:16:47.432222Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [a472dcf368c18f5e6a5223534ab4707aba16f0f2f8f1d2a0a9e7ffbfa099c6a6] <==
	{"level":"info","ts":"2025-09-08T13:18:10.306073Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-08T13:18:10.306477Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-08T13:18:10.307823Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-08T13:18:11.239821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-08T13:18:11.240087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-08T13:18:11.24026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-09-08T13:18:11.240353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-09-08T13:18:11.240432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-09-08T13:18:11.240522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-09-08T13:18:11.240598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-09-08T13:18:11.242592Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-552634 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-08T13:18:11.242762Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T13:18:11.244005Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-08T13:18:11.242819Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T13:18:11.255565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-09-08T13:18:11.258399Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-08T13:18:11.258515Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-08T13:28:11.752103Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":972}
	{"level":"info","ts":"2025-09-08T13:28:11.888146Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":972,"took":"135.795808ms","hash":1587590370}
	{"level":"info","ts":"2025-09-08T13:28:11.888199Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1587590370,"revision":972,"compact-revision":-1}
	{"level":"warn","ts":"2025-09-08T13:28:11.888831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.014964ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638355159511420113 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1220 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1034 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-08T13:28:11.889603Z","caller":"traceutil/trace.go:171","msg":"trace[402090051] transaction","detail":"{read_only:false; response_revision:1222; number_of_response:1; }","duration":"104.590165ms","start":"2025-09-08T13:28:11.784301Z","end":"2025-09-08T13:28:11.888891Z","steps":["trace[402090051] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; req_size:1104; } (duration: 102.831739ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:33:11.758737Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1221}
	{"level":"info","ts":"2025-09-08T13:33:11.759693Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1221,"took":"695.563µs","hash":1442849184}
	{"level":"info","ts":"2025-09-08T13:33:11.759721Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1442849184,"revision":1221,"compact-revision":972}
	
	
	==> kernel <==
	 13:36:55 up 19:19,  0 users,  load average: 0.75, 0.52, 1.20
	Linux old-k8s-version-552634 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [873d0865e41efef1cdc5723e8effa5359186450c3fdc15bde735024a70d67f7a] <==
	I0908 13:34:46.608391       1 main.go:301] handling current node
	I0908 13:34:56.608361       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:34:56.608396       1 main.go:301] handling current node
	I0908 13:35:06.608868       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:35:06.608915       1 main.go:301] handling current node
	I0908 13:35:16.608218       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:35:16.608433       1 main.go:301] handling current node
	I0908 13:35:26.616658       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:35:26.616692       1 main.go:301] handling current node
	I0908 13:35:36.610451       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:35:36.610487       1 main.go:301] handling current node
	I0908 13:35:46.609328       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:35:46.609363       1 main.go:301] handling current node
	I0908 13:35:56.616338       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:35:56.616376       1 main.go:301] handling current node
	I0908 13:36:06.611368       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:36:06.611428       1 main.go:301] handling current node
	I0908 13:36:16.616661       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:36:16.616753       1 main.go:301] handling current node
	I0908 13:36:26.616669       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:36:26.616762       1 main.go:301] handling current node
	I0908 13:36:36.608477       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:36:36.608510       1 main.go:301] handling current node
	I0908 13:36:46.617284       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:36:46.617317       1 main.go:301] handling current node
	
	
	==> kindnet [9895c6c404f918357f3fe8f891a3b387606c5f693ab288d576f52f4f6ff3214f] <==
	I0908 13:17:10.107287       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 13:17:10.107858       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0908 13:17:10.208007       1 main.go:148] setting mtu 1500 for CNI 
	I0908 13:17:10.208039       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 13:17:10.208056       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T13:17:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0908 13:17:10.399790       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 13:17:10.399870       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 13:17:10.399897       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 13:17:10.402060       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0908 13:17:10.601314       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 13:17:10.601444       1 metrics.go:72] Registering metrics
	I0908 13:17:10.601636       1 controller.go:711] "Syncing nftables rules"
	I0908 13:17:20.403628       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:17:20.403684       1 main.go:301] handling current node
	I0908 13:17:30.404510       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:17:30.404697       1 main.go:301] handling current node
	I0908 13:17:40.399112       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:17:40.399145       1 main.go:301] handling current node
	
	
	==> kube-apiserver [12a8c02c281d2079f1f0b5cb46532c15ceef81c18c7ee4d11f73a0a60044feaf] <==
	I0908 13:16:53.898716       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0908 13:16:53.921523       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0908 13:16:53.933698       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	http2: server: error reading preface from client 192.168.76.2:33400: read tcp 192.168.76.2:8443->192.168.76.2:33400: read: connection reset by peer
	I0908 13:17:06.498189       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0908 13:17:06.692514       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	W0908 13:17:48.243423       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:17:48.243491       1 controller.go:135] adding "v1beta1.metrics.k8s.io" to AggregationController failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 13:17:48.243950       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I0908 13:17:48.244200       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 13:17:48.252554       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:17:48.252623       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0908 13:17:48.252661       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0908 13:17:48.252685       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I0908 13:17:48.252695       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0908 13:17:48.417382       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.99.6.154"}
	W0908 13:17:48.439844       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:17:48.439914       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0908 13:17:48.441341       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	W0908 13:17:48.455258       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:17:48.455515       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	
	
	==> kube-apiserver [ad5401098ad612028621c25bb73c63049b339fca6a311e78ef665de02be9a792] <==
	I0908 13:33:15.313661       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:33:15.313598       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:33:15.313766       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 13:33:15.314703       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:34:13.881868       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.6.154:443: connect: connection refused
	I0908 13:34:13.881893       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 13:34:15.314672       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:34:15.314711       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0908 13:34:15.314720       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:34:15.314820       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:34:15.314980       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 13:34:15.315799       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:35:13.881381       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.6.154:443: connect: connection refused
	I0908 13:35:13.881412       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0908 13:36:13.881873       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.6.154:443: connect: connection refused
	I0908 13:36:13.882099       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 13:36:15.314810       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:36:15.314849       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0908 13:36:15.314856       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:36:15.316179       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 13:36:15.316292       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 13:36:15.316305       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [306f060aeefe68259f7a715c7e170802f56b0889bb02eba2839a448bbe10626f] <==
	I0908 13:31:57.404480       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:32:26.876203       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:32:27.423915       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:32:56.885709       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:32:57.432773       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:33:26.891139       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:33:27.440983       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:33:56.896222       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:33:57.449004       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:34:26.901940       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:34:27.456926       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0908 13:34:44.655235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="120.842µs"
	I0908 13:34:54.651559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="80.958µs"
	E0908 13:34:56.906316       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:34:57.464457       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0908 13:34:57.631743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.766µs"
	I0908 13:34:57.655126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="198.846µs"
	I0908 13:34:58.635500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.502µs"
	I0908 13:35:07.649700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="77.832µs"
	E0908 13:35:26.910956       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:35:27.472768       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:35:56.915853       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:35:57.480123       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 13:36:26.922173       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 13:36:27.488352       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [cbd09fa5b3a5f2baa29480044435939c6661de8956c0144f35b364d38a9a8c5d] <==
	I0908 13:17:06.905350       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-d78mw"
	I0908 13:17:06.961512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="451.892045ms"
	I0908 13:17:06.990983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.413519ms"
	I0908 13:17:06.991131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.636µs"
	I0908 13:17:07.009558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.894µs"
	I0908 13:17:07.066198       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.47µs"
	I0908 13:17:08.290007       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0908 13:17:08.327643       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-l7qhx"
	I0908 13:17:08.357540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.022947ms"
	I0908 13:17:08.370967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.380063ms"
	I0908 13:17:08.371362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.453µs"
	I0908 13:17:09.300479       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.878µs"
	I0908 13:17:09.324970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.488µs"
	I0908 13:17:09.335279       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.751µs"
	I0908 13:17:33.297910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.014µs"
	I0908 13:17:33.333907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.272005ms"
	I0908 13:17:33.334047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.728µs"
	I0908 13:17:48.277484       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I0908 13:17:48.293942       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-57f55c9bc5-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	I0908 13:17:48.302667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="26.010125ms"
	E0908 13:17:48.302864       1 replica_set.go:557] sync "kube-system/metrics-server-57f55c9bc5" failed with pods "metrics-server-57f55c9bc5-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0908 13:17:48.338413       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-ppxnd"
	I0908 13:17:48.367949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="65.027384ms"
	I0908 13:17:48.389055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="20.848675ms"
	I0908 13:17:48.389382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="83.682µs"
	
	
	==> kube-proxy [105ff83200e38a12913faaecd2d0fb83a38b4d40ae898a992f24c5f0b7a7c61b] <==
	I0908 13:18:16.237529       1 server_others.go:69] "Using iptables proxy"
	I0908 13:18:16.293058       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0908 13:18:16.409028       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:18:16.410966       1 server_others.go:152] "Using iptables Proxier"
	I0908 13:18:16.411007       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0908 13:18:16.411015       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0908 13:18:16.411045       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0908 13:18:16.411252       1 server.go:846] "Version info" version="v1.28.0"
	I0908 13:18:16.411262       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:18:16.415120       1 config.go:188] "Starting service config controller"
	I0908 13:18:16.415145       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0908 13:18:16.415164       1 config.go:97] "Starting endpoint slice config controller"
	I0908 13:18:16.415168       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0908 13:18:16.415590       1 config.go:315] "Starting node config controller"
	I0908 13:18:16.415596       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0908 13:18:16.515704       1 shared_informer.go:318] Caches are synced for node config
	I0908 13:18:16.515740       1 shared_informer.go:318] Caches are synced for service config
	I0908 13:18:16.515781       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ebc5022b0aeaa3ac29b4e9ce1ac124b836e51d29870a4e127105d359fce607b3] <==
	I0908 13:17:07.582138       1 server_others.go:69] "Using iptables proxy"
	I0908 13:17:07.625942       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0908 13:17:07.698479       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:17:07.704374       1 server_others.go:152] "Using iptables Proxier"
	I0908 13:17:07.704422       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0908 13:17:07.704431       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0908 13:17:07.704471       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0908 13:17:07.706455       1 server.go:846] "Version info" version="v1.28.0"
	I0908 13:17:07.706479       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:17:07.707911       1 config.go:188] "Starting service config controller"
	I0908 13:17:07.707928       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0908 13:17:07.707946       1 config.go:97] "Starting endpoint slice config controller"
	I0908 13:17:07.707949       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0908 13:17:07.708351       1 config.go:315] "Starting node config controller"
	I0908 13:17:07.708358       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0908 13:17:07.808233       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0908 13:17:07.808322       1 shared_informer.go:318] Caches are synced for service config
	I0908 13:17:07.808601       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [56c17c12d8122dd6d365bc92de07c71d041472f11a561084d79ef44eda4e026b] <==
	W0908 13:16:50.798751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0908 13:16:50.799375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0908 13:16:50.798813       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0908 13:16:50.801342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0908 13:16:50.799293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0908 13:16:50.801760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0908 13:16:50.800539       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0908 13:16:50.801790       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0908 13:16:50.801683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0908 13:16:50.801806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0908 13:16:51.675764       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0908 13:16:51.676019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0908 13:16:51.710585       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0908 13:16:51.710807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0908 13:16:51.813571       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0908 13:16:51.813612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0908 13:16:51.868033       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0908 13:16:51.868069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0908 13:16:51.930076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0908 13:16:51.930623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0908 13:16:51.965948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0908 13:16:51.965986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0908 13:16:52.116484       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0908 13:16:52.116778       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0908 13:16:53.983929       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f99ef8a5289987ae2ab7840f3ca0c7298d3bca981189b327213d7ac0466ffddc] <==
	W0908 13:18:14.287661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0908 13:18:14.287683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0908 13:18:14.287802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0908 13:18:14.287821       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0908 13:18:14.287995       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0908 13:18:14.288017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0908 13:18:14.288173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0908 13:18:14.288194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0908 13:18:14.288275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0908 13:18:14.288294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0908 13:18:14.293206       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0908 13:18:14.293247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0908 13:18:14.293264       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0908 13:18:14.293272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0908 13:18:14.293346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0908 13:18:14.293357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0908 13:18:14.293414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0908 13:18:14.293423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0908 13:18:14.301542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0908 13:18:14.301611       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0908 13:18:14.302109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0908 13:18:14.302132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0908 13:18:14.302325       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0908 13:18:14.302353       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0908 13:18:15.862837       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 08 13:35:36 old-k8s-version-552634 kubelet[667]: E0908 13:35:36.637337     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:35:46 old-k8s-version-552634 kubelet[667]: I0908 13:35:46.636460     667 scope.go:117] "RemoveContainer" containerID="c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0"
	Sep 08 13:35:46 old-k8s-version-552634 kubelet[667]: E0908 13:35:46.636789     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
	Sep 08 13:35:47 old-k8s-version-552634 kubelet[667]: E0908 13:35:47.636583     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:35:47 old-k8s-version-552634 kubelet[667]: E0908 13:35:47.636966     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
	Sep 08 13:35:58 old-k8s-version-552634 kubelet[667]: I0908 13:35:58.636848     667 scope.go:117] "RemoveContainer" containerID="c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0"
	Sep 08 13:35:58 old-k8s-version-552634 kubelet[667]: E0908 13:35:58.637606     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
	Sep 08 13:35:58 old-k8s-version-552634 kubelet[667]: E0908 13:35:58.640242     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:35:58 old-k8s-version-552634 kubelet[667]: E0908 13:35:58.641206     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
	Sep 08 13:36:11 old-k8s-version-552634 kubelet[667]: I0908 13:36:11.636938     667 scope.go:117] "RemoveContainer" containerID="c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0"
	Sep 08 13:36:11 old-k8s-version-552634 kubelet[667]: E0908 13:36:11.637286     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
	Sep 08 13:36:11 old-k8s-version-552634 kubelet[667]: E0908 13:36:11.638482     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:36:12 old-k8s-version-552634 kubelet[667]: E0908 13:36:12.637521     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
	Sep 08 13:36:23 old-k8s-version-552634 kubelet[667]: I0908 13:36:23.636461     667 scope.go:117] "RemoveContainer" containerID="c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0"
	Sep 08 13:36:23 old-k8s-version-552634 kubelet[667]: E0908 13:36:23.636770     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
	Sep 08 13:36:23 old-k8s-version-552634 kubelet[667]: E0908 13:36:23.637660     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:36:24 old-k8s-version-552634 kubelet[667]: E0908 13:36:24.638462     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
	Sep 08 13:36:36 old-k8s-version-552634 kubelet[667]: I0908 13:36:36.636044     667 scope.go:117] "RemoveContainer" containerID="c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0"
	Sep 08 13:36:36 old-k8s-version-552634 kubelet[667]: E0908 13:36:36.641259     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
	Sep 08 13:36:37 old-k8s-version-552634 kubelet[667]: E0908 13:36:37.636513     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:36:38 old-k8s-version-552634 kubelet[667]: E0908 13:36:38.638899     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
	Sep 08 13:36:47 old-k8s-version-552634 kubelet[667]: I0908 13:36:47.636589     667 scope.go:117] "RemoveContainer" containerID="c6c71f6a40e94bc71ee571e0451bf82b5e477d3c91b7dfa6ef51c21f49a269c0"
	Sep 08 13:36:47 old-k8s-version-552634 kubelet[667]: E0908 13:36:47.637326     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fhtcn_kubernetes-dashboard(6c82aeae-a214-4365-a60a-f8075b3bee5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fhtcn" podUID="6c82aeae-a214-4365-a60a-f8075b3bee5f"
	Sep 08 13:36:48 old-k8s-version-552634 kubelet[667]: E0908 13:36:48.637848     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppxnd" podUID="1569882d-8116-4173-9410-def7ce94984b"
	Sep 08 13:36:53 old-k8s-version-552634 kubelet[667]: E0908 13:36:53.637944     667 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2x25h" podUID="834daeba-b747-4a9b-92f8-8b6002a56239"
	
	
	==> storage-provisioner [0b74b004307ae8f29b60bbbe51b55dd3ea17fad6807bb10d9fdaede541bcaa19] <==
	I0908 13:18:57.887092       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 13:18:57.915119       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 13:18:57.918502       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0908 13:19:15.317463       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 13:19:15.317887       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"20fbcd62-30a7-4d88-b856-ad9fb9fbe64d", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-552634_53f6aea1-7a70-4fd8-a67e-250e35f21845 became leader
	I0908 13:19:15.317956       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-552634_53f6aea1-7a70-4fd8-a67e-250e35f21845!
	I0908 13:19:15.418954       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-552634_53f6aea1-7a70-4fd8-a67e-250e35f21845!
	
	
	==> storage-provisioner [66645eab9b879074e918236fe3987ab393e3cfbf8d3bc59ea2e30b38c88ef369] <==
	I0908 13:18:15.931183       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 13:18:45.934234       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
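The apiserver and controller-manager logs above show v1beta1.metrics.k8s.io failing continuously (503 service unavailable, then connection refused to 10.99.6.154:443), which matches metrics-server-57f55c9bc5-ppxnd never starting: the kubelet is backing off pulling fake.domain/registry.k8s.io/echoserver:1.4. A few hedged follow-up checks against this profile (not part of the captured run; the label selector is assumed to be the stock k8s-app=metrics-server):

  kubectl --context old-k8s-version-552634 get apiservice v1beta1.metrics.k8s.io -o wide
  kubectl --context old-k8s-version-552634 -n kube-system get endpoints metrics-server
  kubectl --context old-k8s-version-552634 -n kube-system get pods -l k8s-app=metrics-server -o wide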
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-552634 -n old-k8s-version-552634
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-552634 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-ppxnd kubernetes-dashboard-8694d4445c-2x25h
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-552634 describe pod metrics-server-57f55c9bc5-ppxnd kubernetes-dashboard-8694d4445c-2x25h
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-552634 describe pod metrics-server-57f55c9bc5-ppxnd kubernetes-dashboard-8694d4445c-2x25h: exit status 1 (88.441194ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ppxnd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-2x25h" not found

** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-552634 describe pod metrics-server-57f55c9bc5-ppxnd kubernetes-dashboard-8694d4445c-2x25h: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (543.59s)
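On the dashboard side, the kubelet log above is a steady ImagePullBackOff loop for docker.io/kubernetesui/dashboard:v2.7.0 pinned by digest, so kubernetes-dashboard-8694d4445c-2x25h never got past image pull within the test's wait window. A quick hedged check (not from the captured run) of whether the image ever reached the node's containerd store:

  out/minikube-linux-arm64 -p old-k8s-version-552634 image ls | grep dashboard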

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (543.57s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-96c5t" [de89b47d-3928-4575-a5b7-9e358e5e0c42] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 13:31:17.268211 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:34.195867 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:47.956956 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:36:34.195150 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-978911 -n no-preload-978911
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-08 13:39:33.208638114 +0000 UTC m=+4003.544173195
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-978911 describe po kubernetes-dashboard-855c9754f9-96c5t -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context no-preload-978911 describe po kubernetes-dashboard-855c9754f9-96c5t -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-96c5t
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-978911/192.168.85.2
Start Time:       Mon, 08 Sep 2025 13:20:52 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m7tdb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-m7tdb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t to no-preload-978911
Normal   Pulling    15m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     15m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m36s (x65 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m36s (x65 over 18m)  kubelet            Error: ImagePullBackOff
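The Failed events above attribute the pull failures to Docker Hub's unauthenticated rate limit (429 Too Many Requests), so this looks like shared registry throttling on the CI host rather than anything specific to this profile. A hedged way to confirm from the host, plus one possible mitigation if Docker Hub credentials are available (the digest-pinned pod spec may still insist on pulling from the registry, so the side-load is only a best-effort workaround):

  # anonymous pull from the CI host; another 429 here confirms the shared limit
  docker pull docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
  # with credentials, pull once and side-load the image into the profile
  docker login
  docker pull docker.io/kubernetesui/dashboard:v2.7.0
  out/minikube-linux-arm64 -p no-preload-978911 image load docker.io/kubernetesui/dashboard:v2.7.0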
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-978911 logs kubernetes-dashboard-855c9754f9-96c5t -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-978911 logs kubernetes-dashboard-855c9754f9-96c5t -n kubernetes-dashboard: exit status 1 (122.107888ms)

** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-96c5t" is waiting to start: trying and failing to pull image

** /stderr **
start_stop_delete_test.go:285: kubectl --context no-preload-978911 logs kubernetes-dashboard-855c9754f9-96c5t -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-978911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-978911
helpers_test.go:243: (dbg) docker inspect no-preload-978911:

-- stdout --
	[
	    {
	        "Id": "24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857",
	        "Created": "2025-09-08T13:19:03.26664771Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2963100,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:20:36.233443019Z",
	            "FinishedAt": "2025-09-08T13:20:35.358612725Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857/hostname",
	        "HostsPath": "/var/lib/docker/containers/24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857/hosts",
	        "LogPath": "/var/lib/docker/containers/24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857/24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857-json.log",
	        "Name": "/no-preload-978911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-978911:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-978911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24330c1b4333ab96387b9f127aa2d45c0058b54f8f54fd8eabbd06dec8e91857",
	                "LowerDir": "/var/lib/docker/overlay2/2c3702870ce1f1ebd73a74b1c7bdc7c936e7e11927b295131daadcb49cea8fe8-init/diff:/var/lib/docker/overlay2/665ff8bb3d07b8342629df037737f3667c7c59d9d1f85930dc3dfdf138460626/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c3702870ce1f1ebd73a74b1c7bdc7c936e7e11927b295131daadcb49cea8fe8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c3702870ce1f1ebd73a74b1c7bdc7c936e7e11927b295131daadcb49cea8fe8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c3702870ce1f1ebd73a74b1c7bdc7c936e7e11927b295131daadcb49cea8fe8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-978911",
	                "Source": "/var/lib/docker/volumes/no-preload-978911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-978911",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-978911",
	                "name.minikube.sigs.k8s.io": "no-preload-978911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "109bcb7306c3e12c46f6025c1a9776c0f338398c89088746355bbef941de31ab",
	            "SandboxKey": "/var/run/docker/netns/109bcb7306c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36735"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-978911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:57:c9:ef:5d:ad",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7fb24c92d376a0dfeabdb333b420b7ec19169f584cdf4a776434be0437a8941f",
	                    "EndpointID": "b6bb37a50f7122d779eb26d93dbdd8cdbf70525a449e6a6397c6a42d17b74a11",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-978911",
	                        "24330c1b4333"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
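Note: individual fields from the inspect output above can be pulled out with a Go template instead of reading the full JSON; the harness itself issues the same style of query later in these logs. A minimal sketch, assuming the no-preload-978911 container still exists:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-978911
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-978911

The first prints the host port mapped to the container's SSH port (36733 in the dump above); the second prints the container's IP on the no-preload-978911 network (192.168.85.2).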
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-978911 -n no-preload-978911
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-978911 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-978911 logs -n 25: (1.939545226s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ cert-options-480035 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-480035    │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
	│ ssh     │ -p cert-options-480035 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-480035    │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
	│ delete  │ -p cert-options-480035                                                                                                                                                                                                                              │ cert-options-480035    │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:16 UTC │
	│ start   │ -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:16 UTC │ 08 Sep 25 13:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-552634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:17 UTC │ 08 Sep 25 13:17 UTC │
	│ stop    │ -p old-k8s-version-552634 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:17 UTC │ 08 Sep 25 13:18 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-552634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
	│ start   │ -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
	│ start   │ -p cert-expiration-713579 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-713579 │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:18 UTC │
	│ delete  │ -p cert-expiration-713579                                                                                                                                                                                                                           │ cert-expiration-713579 │ jenkins │ v1.36.0 │ 08 Sep 25 13:18 UTC │ 08 Sep 25 13:19 UTC │
	│ start   │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-978911      │ jenkins │ v1.36.0 │ 08 Sep 25 13:19 UTC │ 08 Sep 25 13:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-978911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-978911      │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ stop    │ -p no-preload-978911 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-978911      │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ addons  │ enable dashboard -p no-preload-978911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-978911      │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ start   │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-978911      │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:21 UTC │
	│ image   │ old-k8s-version-552634 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:36 UTC │ 08 Sep 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-552634 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:36 UTC │ 08 Sep 25 13:36 UTC │
	│ unpause │ -p old-k8s-version-552634 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:36 UTC │ 08 Sep 25 13:36 UTC │
	│ delete  │ -p old-k8s-version-552634                                                                                                                                                                                                                           │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:37 UTC │ 08 Sep 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-552634                                                                                                                                                                                                                           │ old-k8s-version-552634 │ jenkins │ v1.36.0 │ 08 Sep 25 13:37 UTC │ 08 Sep 25 13:37 UTC │
	│ start   │ -p embed-certs-026461 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-026461     │ jenkins │ v1.36.0 │ 08 Sep 25 13:37 UTC │ 08 Sep 25 13:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-026461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-026461     │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ stop    │ -p embed-certs-026461 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-026461     │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ addons  │ enable dashboard -p embed-certs-026461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-026461     │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ start   │ -p embed-certs-026461 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-026461     │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:39 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:38:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:38:23.431636 2973264 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:38:23.431753 2973264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:38:23.431764 2973264 out.go:374] Setting ErrFile to fd 2...
	I0908 13:38:23.431770 2973264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:38:23.432036 2973264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 13:38:23.432406 2973264 out.go:368] Setting JSON to false
	I0908 13:38:23.433381 2973264 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":69655,"bootTime":1757269048,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 13:38:23.433455 2973264 start.go:140] virtualization:  
	I0908 13:38:23.438330 2973264 out.go:179] * [embed-certs-026461] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:38:23.441536 2973264 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:38:23.441597 2973264 notify.go:220] Checking for updates...
	I0908 13:38:23.447407 2973264 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:38:23.450273 2973264 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:38:23.453072 2973264 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 13:38:23.455950 2973264 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:38:23.458897 2973264 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:38:23.462189 2973264 config.go:182] Loaded profile config "embed-certs-026461": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:38:23.462881 2973264 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:38:23.498684 2973264 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:38:23.498808 2973264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:38:23.558054 2973264 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:38:23.545607133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:38:23.558167 2973264 docker.go:318] overlay module found
	I0908 13:38:23.561307 2973264 out.go:179] * Using the docker driver based on existing profile
	I0908 13:38:23.564213 2973264 start.go:304] selected driver: docker
	I0908 13:38:23.564241 2973264 start.go:918] validating driver "docker" against &{Name:embed-certs-026461 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-026461 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:38:23.564342 2973264 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:38:23.565117 2973264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:38:23.624180 2973264 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:38:23.615250601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:38:23.624512 2973264 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:38:23.624541 2973264 cni.go:84] Creating CNI manager for ""
	I0908 13:38:23.624598 2973264 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:38:23.624639 2973264 start.go:348] cluster config:
	{Name:embed-certs-026461 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-026461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:38:23.627802 2973264 out.go:179] * Starting "embed-certs-026461" primary control-plane node in "embed-certs-026461" cluster
	I0908 13:38:23.630518 2973264 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 13:38:23.633358 2973264 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:38:23.636387 2973264 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:38:23.636449 2973264 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
	I0908 13:38:23.636461 2973264 cache.go:58] Caching tarball of preloaded images
	I0908 13:38:23.636477 2973264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:38:23.636563 2973264 preload.go:172] Found /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0908 13:38:23.636574 2973264 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0908 13:38:23.636684 2973264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/embed-certs-026461/config.json ...
	I0908 13:38:23.655519 2973264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 13:38:23.655544 2973264 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 13:38:23.655557 2973264 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:38:23.655581 2973264 start.go:360] acquireMachinesLock for embed-certs-026461: {Name:mkf9ac2eca53de0e7fdfc910926094c35982af21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:38:23.655641 2973264 start.go:364] duration metric: took 35.961µs to acquireMachinesLock for "embed-certs-026461"
	I0908 13:38:23.655667 2973264 start.go:96] Skipping create...Using existing machine configuration
	I0908 13:38:23.655676 2973264 fix.go:54] fixHost starting: 
	I0908 13:38:23.655953 2973264 cli_runner.go:164] Run: docker container inspect embed-certs-026461 --format={{.State.Status}}
	I0908 13:38:23.673302 2973264 fix.go:112] recreateIfNeeded on embed-certs-026461: state=Stopped err=<nil>
	W0908 13:38:23.673331 2973264 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 13:38:23.676624 2973264 out.go:252] * Restarting existing docker container for "embed-certs-026461" ...
	I0908 13:38:23.676705 2973264 cli_runner.go:164] Run: docker start embed-certs-026461
	I0908 13:38:23.936063 2973264 cli_runner.go:164] Run: docker container inspect embed-certs-026461 --format={{.State.Status}}
	I0908 13:38:23.961514 2973264 kic.go:430] container "embed-certs-026461" state is running.
	I0908 13:38:23.961905 2973264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-026461
	I0908 13:38:23.979196 2973264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/embed-certs-026461/config.json ...
	I0908 13:38:23.979427 2973264 machine.go:93] provisionDockerMachine start ...
	I0908 13:38:23.979484 2973264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-026461
	I0908 13:38:24.002320 2973264 main.go:141] libmachine: Using SSH client type: native
	I0908 13:38:24.002687 2973264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36743 <nil> <nil>}
	I0908 13:38:24.002699 2973264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:38:24.004096 2973264 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48002->127.0.0.1:36743: read: connection reset by peer
	I0908 13:38:27.133927 2973264 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-026461
	
	I0908 13:38:27.133948 2973264 ubuntu.go:182] provisioning hostname "embed-certs-026461"
	I0908 13:38:27.134034 2973264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-026461
	I0908 13:38:27.151646 2973264 main.go:141] libmachine: Using SSH client type: native
	I0908 13:38:27.151964 2973264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36743 <nil> <nil>}
	I0908 13:38:27.151979 2973264 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-026461 && echo "embed-certs-026461" | sudo tee /etc/hostname
	I0908 13:38:27.291088 2973264 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-026461
	
	I0908 13:38:27.291191 2973264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-026461
	I0908 13:38:27.309478 2973264 main.go:141] libmachine: Using SSH client type: native
	I0908 13:38:27.309812 2973264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36743 <nil> <nil>}
	I0908 13:38:27.309834 2973264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-026461' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-026461/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-026461' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:38:27.434445 2973264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 13:38:27.434513 2973264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-2749258/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-2749258/.minikube}
	I0908 13:38:27.434545 2973264 ubuntu.go:190] setting up certificates
	I0908 13:38:27.434557 2973264 provision.go:84] configureAuth start
	I0908 13:38:27.434620 2973264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-026461
	I0908 13:38:27.452451 2973264 provision.go:143] copyHostCerts
	I0908 13:38:27.452521 2973264 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem, removing ...
	I0908 13:38:27.452550 2973264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem
	I0908 13:38:27.452629 2973264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem (1123 bytes)
	I0908 13:38:27.452739 2973264 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem, removing ...
	I0908 13:38:27.452748 2973264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem
	I0908 13:38:27.452791 2973264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem (1679 bytes)
	I0908 13:38:27.452853 2973264 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem, removing ...
	I0908 13:38:27.452864 2973264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem
	I0908 13:38:27.452890 2973264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem (1082 bytes)
	I0908 13:38:27.452987 2973264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem org=jenkins.embed-certs-026461 san=[127.0.0.1 192.168.76.2 embed-certs-026461 localhost minikube]
	I0908 13:38:27.961650 2973264 provision.go:177] copyRemoteCerts
	I0908 13:38:27.961726 2973264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:38:27.961774 2973264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-026461
	I0908 13:38:27.980301 2973264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36743 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/embed-certs-026461/id_rsa Username:docker}
	I0908 13:38:28.076047 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 13:38:28.103319 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 13:38:28.128203 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 13:38:28.153203 2973264 provision.go:87] duration metric: took 718.613292ms to configureAuth
	I0908 13:38:28.153231 2973264 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:38:28.153438 2973264 config.go:182] Loaded profile config "embed-certs-026461": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:38:28.153458 2973264 machine.go:96] duration metric: took 4.174015266s to provisionDockerMachine
	I0908 13:38:28.153466 2973264 start.go:293] postStartSetup for "embed-certs-026461" (driver="docker")
	I0908 13:38:28.153476 2973264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:38:28.153527 2973264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:38:28.153579 2973264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-026461
	I0908 13:38:28.170615 2973264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36743 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/embed-certs-026461/id_rsa Username:docker}
	I0908 13:38:28.263316 2973264 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:38:28.266380 2973264 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:38:28.266411 2973264 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:38:28.266421 2973264 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:38:28.266428 2973264 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:38:28.266438 2973264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/addons for local assets ...
	I0908 13:38:28.266492 2973264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/files for local assets ...
	I0908 13:38:28.266567 2973264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem -> 27511142.pem in /etc/ssl/certs
	I0908 13:38:28.266673 2973264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 13:38:28.275099 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:38:28.299306 2973264 start.go:296] duration metric: took 145.824131ms for postStartSetup
	I0908 13:38:28.299403 2973264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:38:28.299453 2973264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-026461
	I0908 13:38:28.316122 2973264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36743 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/embed-certs-026461/id_rsa Username:docker}
	I0908 13:38:28.403250 2973264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:38:28.408380 2973264 fix.go:56] duration metric: took 4.75269715s for fixHost
	I0908 13:38:28.408402 2973264 start.go:83] releasing machines lock for "embed-certs-026461", held for 4.752747741s
	I0908 13:38:28.408469 2973264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-026461
	I0908 13:38:28.426862 2973264 ssh_runner.go:195] Run: cat /version.json
	I0908 13:38:28.426914 2973264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-026461
	I0908 13:38:28.426918 2973264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:38:28.426971 2973264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-026461
	I0908 13:38:28.447195 2973264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36743 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/embed-certs-026461/id_rsa Username:docker}
	I0908 13:38:28.461214 2973264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36743 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/embed-certs-026461/id_rsa Username:docker}
	I0908 13:38:28.664078 2973264 ssh_runner.go:195] Run: systemctl --version
	I0908 13:38:28.668430 2973264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:38:28.672505 2973264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 13:38:28.690151 2973264 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 13:38:28.690238 2973264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:38:28.699184 2973264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 13:38:28.699213 2973264 start.go:495] detecting cgroup driver to use...
	I0908 13:38:28.699244 2973264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:38:28.699302 2973264 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 13:38:28.713176 2973264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 13:38:28.726982 2973264 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:38:28.727062 2973264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:38:28.739842 2973264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:38:28.751670 2973264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:38:28.832370 2973264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:38:28.917293 2973264 docker.go:234] disabling docker service ...
	I0908 13:38:28.917361 2973264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:38:28.929774 2973264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:38:28.941324 2973264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:38:29.028528 2973264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:38:29.120313 2973264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:38:29.132631 2973264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:38:29.149622 2973264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 13:38:29.160669 2973264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 13:38:29.171070 2973264 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 13:38:29.171194 2973264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 13:38:29.181406 2973264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:38:29.191640 2973264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 13:38:29.202010 2973264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:38:29.212799 2973264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:38:29.222911 2973264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 13:38:29.233310 2973264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 13:38:29.243906 2973264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 13:38:29.255102 2973264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:38:29.264222 2973264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:38:29.273121 2973264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:38:29.369416 2973264 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 13:38:29.513484 2973264 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0908 13:38:29.513556 2973264 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0908 13:38:29.517697 2973264 start.go:563] Will wait 60s for crictl version
	I0908 13:38:29.517757 2973264 ssh_runner.go:195] Run: which crictl
	I0908 13:38:29.521260 2973264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:38:29.575214 2973264 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0908 13:38:29.575286 2973264 ssh_runner.go:195] Run: containerd --version
	I0908 13:38:29.600551 2973264 ssh_runner.go:195] Run: containerd --version
	I0908 13:38:29.628270 2973264 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0908 13:38:29.631446 2973264 cli_runner.go:164] Run: docker network inspect embed-certs-026461 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:38:29.648268 2973264 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0908 13:38:29.652228 2973264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:38:29.663685 2973264 kubeadm.go:875] updating cluster {Name:embed-certs-026461 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-026461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:38:29.663807 2973264 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:38:29.663871 2973264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:38:29.702500 2973264 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:38:29.702525 2973264 containerd.go:534] Images already preloaded, skipping extraction
	I0908 13:38:29.702589 2973264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:38:29.737878 2973264 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:38:29.737901 2973264 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:38:29.737908 2973264 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 containerd true true} ...
	I0908 13:38:29.738001 2973264 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-026461 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:embed-certs-026461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 13:38:29.738071 2973264 ssh_runner.go:195] Run: sudo crictl info
	I0908 13:38:29.779507 2973264 cni.go:84] Creating CNI manager for ""
	I0908 13:38:29.779535 2973264 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:38:29.779546 2973264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:38:29.779568 2973264 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-026461 NodeName:embed-certs-026461 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:38:29.779689 2973264 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-026461"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 13:38:29.779758 2973264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:38:29.788915 2973264 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:38:29.788999 2973264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:38:29.799606 2973264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0908 13:38:29.818657 2973264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:38:29.836753 2973264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I0908 13:38:29.855848 2973264 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:38:29.859531 2973264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:38:29.871703 2973264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:38:29.953434 2973264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:38:29.970267 2973264 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/embed-certs-026461 for IP: 192.168.76.2
	I0908 13:38:29.970290 2973264 certs.go:194] generating shared ca certs ...
	I0908 13:38:29.970305 2973264 certs.go:226] acquiring lock for ca certs: {Name:mka64c3c41f67c038c6cf0d4d20f2375b7abe78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:38:29.970608 2973264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key
	I0908 13:38:29.970675 2973264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key
	I0908 13:38:29.970688 2973264 certs.go:256] generating profile certs ...
	I0908 13:38:29.970813 2973264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/embed-certs-026461/client.key
	I0908 13:38:29.970905 2973264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/embed-certs-026461/apiserver.key.d95db358
	I0908 13:38:29.970980 2973264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/embed-certs-026461/proxy-client.key
	I0908 13:38:29.971127 2973264 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem (1338 bytes)
	W0908 13:38:29.971184 2973264 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114_empty.pem, impossibly tiny 0 bytes
	I0908 13:38:29.971199 2973264 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 13:38:29.971225 2973264 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem (1082 bytes)
	I0908 13:38:29.971282 2973264 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:38:29.971336 2973264 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem (1679 bytes)
	I0908 13:38:29.971408 2973264 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:38:29.972032 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:38:29.997723 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 13:38:30.036134 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:38:30.083377 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 13:38:30.124737 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/embed-certs-026461/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0908 13:38:30.159034 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/embed-certs-026461/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 13:38:30.191940 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/embed-certs-026461/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:38:30.221698 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/embed-certs-026461/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 13:38:30.250530 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /usr/share/ca-certificates/27511142.pem (1708 bytes)
	I0908 13:38:30.277722 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:38:30.308289 2973264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem --> /usr/share/ca-certificates/2751114.pem (1338 bytes)
	I0908 13:38:30.345157 2973264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:38:30.370743 2973264 ssh_runner.go:195] Run: openssl version
	I0908 13:38:30.378053 2973264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27511142.pem && ln -fs /usr/share/ca-certificates/27511142.pem /etc/ssl/certs/27511142.pem"
	I0908 13:38:30.391058 2973264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27511142.pem
	I0908 13:38:30.394801 2973264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 12:41 /usr/share/ca-certificates/27511142.pem
	I0908 13:38:30.394907 2973264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27511142.pem
	I0908 13:38:30.401795 2973264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27511142.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 13:38:30.410931 2973264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:38:30.420374 2973264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:38:30.424021 2973264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:38:30.424086 2973264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:38:30.431241 2973264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 13:38:30.440737 2973264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2751114.pem && ln -fs /usr/share/ca-certificates/2751114.pem /etc/ssl/certs/2751114.pem"
	I0908 13:38:30.451114 2973264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2751114.pem
	I0908 13:38:30.455203 2973264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 12:41 /usr/share/ca-certificates/2751114.pem
	I0908 13:38:30.455287 2973264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2751114.pem
	I0908 13:38:30.462730 2973264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2751114.pem /etc/ssl/certs/51391683.0"
	I0908 13:38:30.474691 2973264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:38:30.478666 2973264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 13:38:30.485981 2973264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 13:38:30.493669 2973264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 13:38:30.500464 2973264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 13:38:30.507615 2973264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 13:38:30.514888 2973264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
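	[editorial note] The certificate steps above follow a consistent pattern: hash each CA bundle with "openssl x509 -hash", symlink it into /etc/ssl/certs under that hash, then confirm every control-plane certificate is still valid for at least 24h with "openssl x509 -checkend 86400". A minimal Go sketch of the validity check, shelling out to openssl the same way the remote runner does, is shown below; it is illustrative only (the helper name and file list are assumptions, not minikube's actual code).
	
	    // validcheck.go: report whether certificates remain valid for a given window.
	    package main
	
	    import (
	    	"fmt"
	    	"os/exec"
	    )
	
	    // validFor returns true when openssl confirms the certificate at path
	    // does not expire within `seconds` from now (exit status 0).
	    func validFor(path string, seconds int) bool {
	    	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
	    		"-checkend", fmt.Sprint(seconds))
	    	return cmd.Run() == nil
	    }
	
	    func main() {
	    	for _, c := range []string{
	    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
	    		"/var/lib/minikube/certs/etcd/server.crt",
	    	} {
	    		fmt.Printf("%s valid for 24h: %v\n", c, validFor(c, 86400))
	    	}
	    }
	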
	I0908 13:38:30.521854 2973264 kubeadm.go:392] StartCluster: {Name:embed-certs-026461 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-026461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:38:30.521963 2973264 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0908 13:38:30.522054 2973264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:38:30.567295 2973264 cri.go:89] found id: "892c1c2b789a9b15df80bcdb4cba8e6e44a2700d2d45afd91c68b9df0f978e52"
	I0908 13:38:30.567319 2973264 cri.go:89] found id: "fe3cede4fe48003419ad4d922a72cd175b10badc10a1731c781055596c825baa"
	I0908 13:38:30.567324 2973264 cri.go:89] found id: "e213037c3d5577450863d222bd09abd0b099c0a413275757e8478e475168e6f3"
	I0908 13:38:30.567328 2973264 cri.go:89] found id: "c5c57e9e7465f3b6dcc007639128213d7d29165743b8342185639d44196c0e60"
	I0908 13:38:30.567337 2973264 cri.go:89] found id: "b33eda9a841a9cd6dabfae54d7049877b124dd1b907cbc696c92b7f9454c0782"
	I0908 13:38:30.567341 2973264 cri.go:89] found id: "6330ead2ac8c8438ea6f4bb5f9acb7aa56cc82f9b368662b5bb91e80534650db"
	I0908 13:38:30.567344 2973264 cri.go:89] found id: "d28195f477cc9f19233c21d3f187a2ddca9c454ef053874df5af77d0ffc92680"
	I0908 13:38:30.567348 2973264 cri.go:89] found id: "d2738e9189b007c276ce183af874bcd7acf236cb12e77b9760e42726ff6eefbb"
	I0908 13:38:30.567351 2973264 cri.go:89] found id: ""
	I0908 13:38:30.567402 2973264 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0908 13:38:30.580059 2973264 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-08T13:38:30Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0908 13:38:30.580131 2973264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 13:38:30.589463 2973264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 13:38:30.589484 2973264 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 13:38:30.589543 2973264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 13:38:30.597953 2973264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 13:38:30.598622 2973264 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-026461" does not appear in /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:38:30.598897 2973264 kubeconfig.go:62] /home/jenkins/minikube-integration/21508-2749258/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-026461" cluster setting kubeconfig missing "embed-certs-026461" context setting]
	I0908 13:38:30.599631 2973264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:38:30.605707 2973264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 13:38:30.619485 2973264 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0908 13:38:30.619522 2973264 kubeadm.go:593] duration metric: took 30.032343ms to restartPrimaryControlPlane
	I0908 13:38:30.619532 2973264 kubeadm.go:394] duration metric: took 97.688021ms to StartCluster
	I0908 13:38:30.619548 2973264 settings.go:142] acquiring lock: {Name:mk4a46c455122873706b4d72c01ce6416a89153c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:38:30.619610 2973264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:38:30.620876 2973264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:38:30.621071 2973264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 13:38:30.621372 2973264 config.go:182] Loaded profile config "embed-certs-026461": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:38:30.621423 2973264 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 13:38:30.621492 2973264 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-026461"
	I0908 13:38:30.621506 2973264 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-026461"
	W0908 13:38:30.621518 2973264 addons.go:247] addon storage-provisioner should already be in state true
	I0908 13:38:30.621538 2973264 host.go:66] Checking if "embed-certs-026461" exists ...
	I0908 13:38:30.621649 2973264 addons.go:69] Setting default-storageclass=true in profile "embed-certs-026461"
	I0908 13:38:30.621672 2973264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-026461"
	I0908 13:38:30.622011 2973264 cli_runner.go:164] Run: docker container inspect embed-certs-026461 --format={{.State.Status}}
	I0908 13:38:30.622706 2973264 cli_runner.go:164] Run: docker container inspect embed-certs-026461 --format={{.State.Status}}
	I0908 13:38:30.622829 2973264 addons.go:69] Setting dashboard=true in profile "embed-certs-026461"
	I0908 13:38:30.622847 2973264 addons.go:238] Setting addon dashboard=true in "embed-certs-026461"
	W0908 13:38:30.622854 2973264 addons.go:247] addon dashboard should already be in state true
	I0908 13:38:30.622879 2973264 host.go:66] Checking if "embed-certs-026461" exists ...
	I0908 13:38:30.623259 2973264 cli_runner.go:164] Run: docker container inspect embed-certs-026461 --format={{.State.Status}}
	I0908 13:38:30.625756 2973264 addons.go:69] Setting metrics-server=true in profile "embed-certs-026461"
	I0908 13:38:30.626153 2973264 addons.go:238] Setting addon metrics-server=true in "embed-certs-026461"
	W0908 13:38:30.626210 2973264 addons.go:247] addon metrics-server should already be in state true
	I0908 13:38:30.626305 2973264 host.go:66] Checking if "embed-certs-026461" exists ...
	I0908 13:38:30.627663 2973264 out.go:179] * Verifying Kubernetes components...
	I0908 13:38:30.629986 2973264 cli_runner.go:164] Run: docker container inspect embed-certs-026461 --format={{.State.Status}}
	I0908 13:38:30.632591 2973264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:38:30.670971 2973264 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 13:38:30.675059 2973264 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 13:38:30.683275 2973264 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 13:38:30.683302 2973264 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 13:38:30.683367 2973264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-026461
	I0908 13:38:30.729678 2973264 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 13:38:30.732129 2973264 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:38:30.732162 2973264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 13:38:30.732219 2973264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-026461
	I0908 13:38:30.733135 2973264 addons.go:238] Setting addon default-storageclass=true in "embed-certs-026461"
	W0908 13:38:30.733151 2973264 addons.go:247] addon default-storageclass should already be in state true
	I0908 13:38:30.733175 2973264 host.go:66] Checking if "embed-certs-026461" exists ...
	I0908 13:38:30.733606 2973264 cli_runner.go:164] Run: docker container inspect embed-certs-026461 --format={{.State.Status}}
	I0908 13:38:30.738636 2973264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36743 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/embed-certs-026461/id_rsa Username:docker}
	I0908 13:38:30.739926 2973264 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 13:38:30.746435 2973264 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 13:38:30.746462 2973264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 13:38:30.746540 2973264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-026461
	I0908 13:38:30.785127 2973264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36743 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/embed-certs-026461/id_rsa Username:docker}
	I0908 13:38:30.798567 2973264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36743 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/embed-certs-026461/id_rsa Username:docker}
	I0908 13:38:30.802541 2973264 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 13:38:30.802560 2973264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 13:38:30.802624 2973264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-026461
	I0908 13:38:30.828363 2973264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36743 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/embed-certs-026461/id_rsa Username:docker}
	I0908 13:38:30.850430 2973264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:38:30.906862 2973264 node_ready.go:35] waiting up to 6m0s for node "embed-certs-026461" to be "Ready" ...
	I0908 13:38:31.018255 2973264 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 13:38:31.018391 2973264 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 13:38:31.075255 2973264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:38:31.102569 2973264 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 13:38:31.102646 2973264 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 13:38:31.107817 2973264 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 13:38:31.107890 2973264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 13:38:31.139592 2973264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:38:31.181743 2973264 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 13:38:31.181817 2973264 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 13:38:31.210285 2973264 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 13:38:31.210395 2973264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 13:38:31.306591 2973264 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:38:31.306668 2973264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 13:38:31.310678 2973264 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 13:38:31.310750 2973264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 13:38:31.414481 2973264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:38:31.574183 2973264 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 13:38:31.574260 2973264 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 13:38:31.768198 2973264 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 13:38:31.768276 2973264 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 13:38:31.875009 2973264 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 13:38:31.875087 2973264 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 13:38:31.995706 2973264 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 13:38:31.995771 2973264 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 13:38:32.052958 2973264 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:38:32.053048 2973264 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 13:38:32.100702 2973264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:38:35.932656 2973264 node_ready.go:49] node "embed-certs-026461" is "Ready"
	I0908 13:38:35.932682 2973264 node_ready.go:38] duration metric: took 5.02573443s for node "embed-certs-026461" to be "Ready" ...
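	[editorial note] The node_ready wait above polls the node object until its Ready condition is True. A minimal client-go sketch of that check is shown below; the kubeconfig path and the polling loop are assumptions for illustration, not minikube's own implementation.
	
	    // nodeready.go: wait until a node reports the Ready condition as True.
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	    	"time"
	
	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )
	
	    // nodeReady fetches the node and checks its Ready condition.
	    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	    	if err != nil {
	    		return false, err
	    	}
	    	for _, cond := range node.Status.Conditions {
	    		if cond.Type == corev1.NodeReady {
	    			return cond.Status == corev1.ConditionTrue, nil
	    		}
	    	}
	    	return false, nil
	    }
	
	    func main() {
	    	// Hypothetical kubeconfig path; use the one written for the profile.
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs := kubernetes.NewForConfigOrDie(cfg)
	    	for {
	    		if ok, err := nodeReady(context.Background(), cs, "embed-certs-026461"); err == nil && ok {
	    			fmt.Println("node is Ready")
	    			return
	    		}
	    		time.Sleep(2 * time.Second)
	    	}
	    }
	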
	I0908 13:38:35.932696 2973264 api_server.go:52] waiting for apiserver process to appear ...
	I0908 13:38:35.932756 2973264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:38:38.595986 2973264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.520640712s)
	I0908 13:38:38.596033 2973264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.456358241s)
	I0908 13:38:38.596082 2973264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.181526022s)
	I0908 13:38:38.596093 2973264 addons.go:479] Verifying addon metrics-server=true in "embed-certs-026461"
	I0908 13:38:38.596187 2973264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.495397385s)
	I0908 13:38:38.596868 2973264 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.664097759s)
	I0908 13:38:38.596896 2973264 api_server.go:72] duration metric: took 7.975794048s to wait for apiserver process to appear ...
	I0908 13:38:38.596902 2973264 api_server.go:88] waiting for apiserver healthz status ...
	I0908 13:38:38.596915 2973264 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 13:38:38.599644 2973264 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-026461 addons enable metrics-server
	
	I0908 13:38:38.611192 2973264 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 13:38:38.611313 2973264 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 13:38:38.620914 2973264 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0908 13:38:38.623828 2973264 addons.go:514] duration metric: took 8.002389581s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0908 13:38:39.097623 2973264 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 13:38:39.105894 2973264 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0908 13:38:39.106915 2973264 api_server.go:141] control plane version: v1.34.0
	I0908 13:38:39.106938 2973264 api_server.go:131] duration metric: took 510.029551ms to wait for apiserver health ...
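	[editorial note] The healthz wait above simply retries GET https://192.168.76.2:8443/healthz until the apiserver stops returning 500 and answers 200 "ok". A minimal Go sketch of that polling loop follows; skipping TLS verification here is purely for illustration and is not how minikube authenticates to the apiserver.
	
	    // healthz.go: poll the apiserver /healthz endpoint until it returns 200.
	    package main
	
	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )
	
	    func main() {
	    	client := &http.Client{
	    		Timeout:   5 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	url := "https://192.168.76.2:8443/healthz"
	    	for {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				fmt.Println("apiserver healthy")
	    				return
	    			}
	    			// A 500 here lists which poststarthooks are still failing.
	    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    }
	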
	I0908 13:38:39.106947 2973264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 13:38:39.110279 2973264 system_pods.go:59] 9 kube-system pods found
	I0908 13:38:39.110319 2973264 system_pods.go:61] "coredns-66bc5c9577-4wnmn" [803fa849-05d7-44ca-af9d-6abfa917c06d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:38:39.110329 2973264 system_pods.go:61] "etcd-embed-certs-026461" [59012630-7fca-4b36-9f5b-4c7be9fb6d5e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:38:39.110335 2973264 system_pods.go:61] "kindnet-9flm7" [0dd7159f-4642-483d-b716-65af7cab6dd0] Running
	I0908 13:38:39.110385 2973264 system_pods.go:61] "kube-apiserver-embed-certs-026461" [c7c40d4c-0d32-4558-909a-7cd91730cbe1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:38:39.110403 2973264 system_pods.go:61] "kube-controller-manager-embed-certs-026461" [1dc4265d-a3e6-4c22-9cac-944b0bb826ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:38:39.110408 2973264 system_pods.go:61] "kube-proxy-xsnrj" [46ce8c20-e99b-41a6-9153-3d6e94512976] Running
	I0908 13:38:39.110415 2973264 system_pods.go:61] "kube-scheduler-embed-certs-026461" [77d24eae-f5f9-431e-8905-fcb2b0e7a894] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:38:39.110424 2973264 system_pods.go:61] "metrics-server-746fcd58dc-wxqrm" [abebbfc6-c3ac-4cc3-b788-e4dce09f134f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:38:39.110429 2973264 system_pods.go:61] "storage-provisioner" [e7cdde3b-06be-4343-8d0c-85d4ae813c46] Running
	I0908 13:38:39.110436 2973264 system_pods.go:74] duration metric: took 3.483286ms to wait for pod list to return data ...
	I0908 13:38:39.110448 2973264 default_sa.go:34] waiting for default service account to be created ...
	I0908 13:38:39.112827 2973264 default_sa.go:45] found service account: "default"
	I0908 13:38:39.112854 2973264 default_sa.go:55] duration metric: took 2.399088ms for default service account to be created ...
	I0908 13:38:39.112863 2973264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 13:38:39.115579 2973264 system_pods.go:86] 9 kube-system pods found
	I0908 13:38:39.115628 2973264 system_pods.go:89] "coredns-66bc5c9577-4wnmn" [803fa849-05d7-44ca-af9d-6abfa917c06d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:38:39.115638 2973264 system_pods.go:89] "etcd-embed-certs-026461" [59012630-7fca-4b36-9f5b-4c7be9fb6d5e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:38:39.115652 2973264 system_pods.go:89] "kindnet-9flm7" [0dd7159f-4642-483d-b716-65af7cab6dd0] Running
	I0908 13:38:39.115660 2973264 system_pods.go:89] "kube-apiserver-embed-certs-026461" [c7c40d4c-0d32-4558-909a-7cd91730cbe1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:38:39.115671 2973264 system_pods.go:89] "kube-controller-manager-embed-certs-026461" [1dc4265d-a3e6-4c22-9cac-944b0bb826ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:38:39.115676 2973264 system_pods.go:89] "kube-proxy-xsnrj" [46ce8c20-e99b-41a6-9153-3d6e94512976] Running
	I0908 13:38:39.115684 2973264 system_pods.go:89] "kube-scheduler-embed-certs-026461" [77d24eae-f5f9-431e-8905-fcb2b0e7a894] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:38:39.115695 2973264 system_pods.go:89] "metrics-server-746fcd58dc-wxqrm" [abebbfc6-c3ac-4cc3-b788-e4dce09f134f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:38:39.115699 2973264 system_pods.go:89] "storage-provisioner" [e7cdde3b-06be-4343-8d0c-85d4ae813c46] Running
	I0908 13:38:39.115706 2973264 system_pods.go:126] duration metric: took 2.837687ms to wait for k8s-apps to be running ...
	I0908 13:38:39.115717 2973264 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 13:38:39.115775 2973264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:38:39.128432 2973264 system_svc.go:56] duration metric: took 12.703916ms WaitForService to wait for kubelet
	I0908 13:38:39.128459 2973264 kubeadm.go:578] duration metric: took 8.507355798s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:38:39.128489 2973264 node_conditions.go:102] verifying NodePressure condition ...
	I0908 13:38:39.131413 2973264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 13:38:39.131448 2973264 node_conditions.go:123] node cpu capacity is 2
	I0908 13:38:39.131461 2973264 node_conditions.go:105] duration metric: took 2.965765ms to run NodePressure ...
	I0908 13:38:39.131475 2973264 start.go:241] waiting for startup goroutines ...
	I0908 13:38:39.131482 2973264 start.go:246] waiting for cluster config update ...
	I0908 13:38:39.131494 2973264 start.go:255] writing updated cluster config ...
	I0908 13:38:39.131798 2973264 ssh_runner.go:195] Run: rm -f paused
	I0908 13:38:39.135669 2973264 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:38:39.210945 2973264 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4wnmn" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 13:38:41.216089 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:38:43.217564 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:38:45.231251 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:38:47.716677 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:38:49.716979 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:38:52.216239 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:38:54.216469 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:38:56.216595 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:38:58.716912 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:39:00.717455 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:39:03.216811 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:39:05.716433 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:39:07.716529 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:39:10.216231 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:39:12.216637 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	W0908 13:39:14.216752 2973264 pod_ready.go:104] pod "coredns-66bc5c9577-4wnmn" is not "Ready", error: <nil>
	I0908 13:39:14.716097 2973264 pod_ready.go:94] pod "coredns-66bc5c9577-4wnmn" is "Ready"
	I0908 13:39:14.716129 2973264 pod_ready.go:86] duration metric: took 35.505157895s for pod "coredns-66bc5c9577-4wnmn" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:39:14.718840 2973264 pod_ready.go:83] waiting for pod "etcd-embed-certs-026461" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:39:14.723617 2973264 pod_ready.go:94] pod "etcd-embed-certs-026461" is "Ready"
	I0908 13:39:14.723650 2973264 pod_ready.go:86] duration metric: took 4.776503ms for pod "etcd-embed-certs-026461" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:39:14.725900 2973264 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-026461" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:39:14.730618 2973264 pod_ready.go:94] pod "kube-apiserver-embed-certs-026461" is "Ready"
	I0908 13:39:14.730644 2973264 pod_ready.go:86] duration metric: took 4.718576ms for pod "kube-apiserver-embed-certs-026461" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:39:14.733089 2973264 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-026461" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:39:14.915668 2973264 pod_ready.go:94] pod "kube-controller-manager-embed-certs-026461" is "Ready"
	I0908 13:39:14.915695 2973264 pod_ready.go:86] duration metric: took 182.579532ms for pod "kube-controller-manager-embed-certs-026461" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:39:15.115780 2973264 pod_ready.go:83] waiting for pod "kube-proxy-xsnrj" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:39:15.513903 2973264 pod_ready.go:94] pod "kube-proxy-xsnrj" is "Ready"
	I0908 13:39:15.513935 2973264 pod_ready.go:86] duration metric: took 398.126531ms for pod "kube-proxy-xsnrj" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:39:15.714953 2973264 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-026461" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:39:16.114741 2973264 pod_ready.go:94] pod "kube-scheduler-embed-certs-026461" is "Ready"
	I0908 13:39:16.114771 2973264 pod_ready.go:86] duration metric: took 399.790394ms for pod "kube-scheduler-embed-certs-026461" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:39:16.114784 2973264 pod_ready.go:40] duration metric: took 36.979084749s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:39:16.185302 2973264 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 13:39:16.188596 2973264 out.go:179] * Done! kubectl is now configured to use "embed-certs-026461" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	438701d402f78       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   8                   411da2a550179       dashboard-metrics-scraper-6ffb444bf9-fpptr
	93d2704d5a3f9       66749159455b3       18 minutes ago      Running             storage-provisioner         2                   19efbda2c0a5e       storage-provisioner
	d824c7b540ec1       138784d87c9c5       18 minutes ago      Running             coredns                     1                   6cb3e5f9141b7       coredns-66bc5c9577-7www8
	4621bfda7dec4       1611cd07b61d5       18 minutes ago      Running             busybox                     1                   f2102d5258939       busybox
	682d750821b44       b1a8c6f707935       18 minutes ago      Running             kindnet-cni                 1                   98fe16de354ae       kindnet-8cc7v
	e3598e5f85914       66749159455b3       18 minutes ago      Exited              storage-provisioner         1                   19efbda2c0a5e       storage-provisioner
	03b6d80d695b8       6fc32d66c1411       18 minutes ago      Running             kube-proxy                  1                   a3583924057c7       kube-proxy-zb84d
	b824f7e2b00ef       a1894772a478e       18 minutes ago      Running             etcd                        1                   e6c013134cb72       etcd-no-preload-978911
	5bb6d3486bdd6       a25f5ef9c34c3       18 minutes ago      Running             kube-scheduler              1                   76496f1d745fa       kube-scheduler-no-preload-978911
	3d049d3c26e41       d291939e99406       18 minutes ago      Running             kube-apiserver              1                   77e31618cee33       kube-apiserver-no-preload-978911
	ea8f24cba6f98       996be7e86d9b3       18 minutes ago      Running             kube-controller-manager     1                   4f510283fc1d4       kube-controller-manager-no-preload-978911
	b2d7f168abc96       1611cd07b61d5       19 minutes ago      Exited              busybox                     0                   e842701f59485       busybox
	bdd85fd62e900       138784d87c9c5       19 minutes ago      Exited              coredns                     0                   f6df22374ec66       coredns-66bc5c9577-7www8
	e685e05b08b51       b1a8c6f707935       19 minutes ago      Exited              kindnet-cni                 0                   dc4559b71e9bf       kindnet-8cc7v
	d5252e4ac54a4       6fc32d66c1411       19 minutes ago      Exited              kube-proxy                  0                   8af2378844944       kube-proxy-zb84d
	f5e8fe9a2b29c       a25f5ef9c34c3       20 minutes ago      Exited              kube-scheduler              0                   988dbfe721d3c       kube-scheduler-no-preload-978911
	e59a4771913f0       996be7e86d9b3       20 minutes ago      Exited              kube-controller-manager     0                   578cc7b14d6e0       kube-controller-manager-no-preload-978911
	453e5e825289a       d291939e99406       20 minutes ago      Exited              kube-apiserver              0                   73f1bd4a88515       kube-apiserver-no-preload-978911
	89cf83ed06352       a1894772a478e       20 minutes ago      Exited              etcd                        0                   cf3cbf7dab6c0       etcd-no-preload-978911
	
	
	==> containerd <==
	Sep 08 13:31:59 no-preload-978911 containerd[575]: time="2025-09-08T13:31:59.728364773Z" level=info msg="received exit event container_id:\"ab34cb3e596b643c5b42ed6720836a02721a322546006166e1d8c92cb8fdf767\" id:\"ab34cb3e596b643c5b42ed6720836a02721a322546006166e1d8c92cb8fdf767\" pid:3176 exit_status:255 exited_at:{seconds:1757338319 nanos:728091845}"
	Sep 08 13:31:59 no-preload-978911 containerd[575]: time="2025-09-08T13:31:59.758138193Z" level=info msg="shim disconnected" id=ab34cb3e596b643c5b42ed6720836a02721a322546006166e1d8c92cb8fdf767 namespace=k8s.io
	Sep 08 13:31:59 no-preload-978911 containerd[575]: time="2025-09-08T13:31:59.758184017Z" level=warning msg="cleaning up after shim disconnected" id=ab34cb3e596b643c5b42ed6720836a02721a322546006166e1d8c92cb8fdf767 namespace=k8s.io
	Sep 08 13:31:59 no-preload-978911 containerd[575]: time="2025-09-08T13:31:59.758222588Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:32:00 no-preload-978911 containerd[575]: time="2025-09-08T13:32:00.718328379Z" level=info msg="RemoveContainer for \"932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393\""
	Sep 08 13:32:00 no-preload-978911 containerd[575]: time="2025-09-08T13:32:00.725549695Z" level=info msg="RemoveContainer for \"932b7513f2edf1227c74a665e4a4d32d9093c6e1512a5c3d71378dd149362393\" returns successfully"
	Sep 08 13:36:41 no-preload-978911 containerd[575]: time="2025-09-08T13:36:41.638713347Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 08 13:36:41 no-preload-978911 containerd[575]: time="2025-09-08T13:36:41.647500495Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 08 13:36:41 no-preload-978911 containerd[575]: time="2025-09-08T13:36:41.649553993Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 08 13:36:41 no-preload-978911 containerd[575]: time="2025-09-08T13:36:41.649556980Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 08 13:36:58 no-preload-978911 containerd[575]: time="2025-09-08T13:36:58.639056764Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 13:36:58 no-preload-978911 containerd[575]: time="2025-09-08T13:36:58.641447279Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:36:58 no-preload-978911 containerd[575]: time="2025-09-08T13:36:58.799484677Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:36:59 no-preload-978911 containerd[575]: time="2025-09-08T13:36:59.113324317Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 13:36:59 no-preload-978911 containerd[575]: time="2025-09-08T13:36:59.113564787Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 08 13:37:05 no-preload-978911 containerd[575]: time="2025-09-08T13:37:05.640864761Z" level=info msg="CreateContainer within sandbox \"411da2a5501797ee6af497aee8ce43558927a623457b05e0f7c477289219eea7\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Sep 08 13:37:05 no-preload-978911 containerd[575]: time="2025-09-08T13:37:05.668997807Z" level=info msg="CreateContainer within sandbox \"411da2a5501797ee6af497aee8ce43558927a623457b05e0f7c477289219eea7\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0\""
	Sep 08 13:37:05 no-preload-978911 containerd[575]: time="2025-09-08T13:37:05.671361123Z" level=info msg="StartContainer for \"438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0\""
	Sep 08 13:37:05 no-preload-978911 containerd[575]: time="2025-09-08T13:37:05.807772488Z" level=info msg="received exit event container_id:\"438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0\" id:\"438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0\" pid:3291 exit_status:255 exited_at:{seconds:1757338625 nanos:803190378}"
	Sep 08 13:37:05 no-preload-978911 containerd[575]: time="2025-09-08T13:37:05.808233030Z" level=info msg="StartContainer for \"438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0\" returns successfully"
	Sep 08 13:37:07 no-preload-978911 containerd[575]: time="2025-09-08T13:37:07.283234818Z" level=info msg="shim disconnected" id=438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0 namespace=k8s.io
	Sep 08 13:37:07 no-preload-978911 containerd[575]: time="2025-09-08T13:37:07.283272601Z" level=warning msg="cleaning up after shim disconnected" id=438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0 namespace=k8s.io
	Sep 08 13:37:07 no-preload-978911 containerd[575]: time="2025-09-08T13:37:07.283307464Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:37:07 no-preload-978911 containerd[575]: time="2025-09-08T13:37:07.575907187Z" level=info msg="RemoveContainer for \"ab34cb3e596b643c5b42ed6720836a02721a322546006166e1d8c92cb8fdf767\""
	Sep 08 13:37:07 no-preload-978911 containerd[575]: time="2025-09-08T13:37:07.585057615Z" level=info msg="RemoveContainer for \"ab34cb3e596b643c5b42ed6720836a02721a322546006166e1d8c92cb8fdf767\" returns successfully"
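The containerd excerpt above records two distinct pull failures: the metrics-server image reference at fake.domain cannot be resolved through the node's DNS (192.168.85.1:53), and the dashboard digest pull against registry-1.docker.io is rejected with 429 Too Many Requests (unauthenticated pull rate limit). The repeated "failed to decode hosts.toml" warnings suggest containerd could not parse a registry hosts.toml under its configured certs.d path. A diagnostic sketch, assuming the profile name seen in these logs and the default /etc/containerd/certs.d layout (neither is verified by this report):

	# Assumptions: minikube profile "no-preload-978911"; default certs.d layout on the node.
	minikube ssh -p no-preload-978911
	# Inside the node: inspect the hosts.toml that containerd failed to decode
	sudo cat /etc/containerd/certs.d/docker.io/hosts.toml
	# Retry the dashboard pull through the CRI to reproduce the 429 response
	sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0
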
	
	
	==> coredns [bdd85fd62e90072711cf66c0db968c1136a28f624fc072df147df0fc494584c8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46697 - 39181 "HINFO IN 3567785162455233463.2380029076477560907. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.053623907s
	
	
	==> coredns [d824c7b540ec1bc7cbfe79da017657418f04a7cd09baf2d649873ca9006eab62] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47636 - 5170 "HINFO IN 777682767407552810.8526643665007990753. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024371508s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
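In the second coredns instance above, the kubernetes plugin could not reach the in-cluster API VIP (10.96.0.1:443) before its caches synced, so the ready plugin kept reporting "Still waiting on: \"kubernetes\"" until the listers recovered. A read-only check, assuming the kubectl context is named after the minikube profile seen in these logs:

	# Assumption: kubectl context name matches the profile "no-preload-978911".
	kubectl --context no-preload-978911 -n kube-system get pods -l k8s-app=kube-dns -o wide
	kubectl --context no-preload-978911 get svc kubernetes -o wide
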
	
	
	==> describe nodes <==
	Name:               no-preload-978911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-978911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=no-preload-978911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_19_36_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:19:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-978911
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 13:39:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 13:35:56 +0000   Mon, 08 Sep 2025 13:19:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 13:35:56 +0000   Mon, 08 Sep 2025 13:19:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 13:35:56 +0000   Mon, 08 Sep 2025 13:19:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 13:35:56 +0000   Mon, 08 Sep 2025 13:19:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-978911
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3dc0098bad94666b8110d0f8beda533
	  System UUID:                da8051b6-1408-4715-82b5-8cf1ac6dab71
	  Boot ID:                    9f5228b8-b58e-4b72-938a-84f5f7e9d841
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-7www8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 etcd-no-preload-978911                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-8cc7v                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-no-preload-978911              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-no-preload-978911     200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-zb84d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-978911              100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-vh962               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fpptr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-96c5t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-978911 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-978911 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-978911 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  19m                kubelet          Node no-preload-978911 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    19m                kubelet          Node no-preload-978911 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m                kubelet          Node no-preload-978911 status is now: NodeHasSufficientPID
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           19m                node-controller  Node no-preload-978911 event: Registered Node no-preload-978911 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-978911 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-978911 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node no-preload-978911 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                node-controller  Node no-preload-978911 event: Registered Node no-preload-978911 in Controller
	
	
	==> dmesg <==
	[Sep 8 12:32] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [89cf83ed06352d9266afeb8d98085daf1e7cc6dfe5636d2a24ff0d4804025f62] <==
	{"level":"warn","ts":"2025-09-08T13:19:30.969407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:30.994219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.015279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.032104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.051624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.081709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.093479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.112752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.148035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.164589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.187940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.208175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.247257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.251205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.276140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.291354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.309207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.326852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.347417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.382335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.439252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.459233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.479141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.497168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:19:31.581389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59278","server-name":"","error":"EOF"}
	
	
	==> etcd [b824f7e2b00ef8bf7362bf6ae6b8ba0555a19feffebfcd3c008135d3ac1eb9b2] <==
	{"level":"warn","ts":"2025-09-08T13:20:46.922442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:46.938236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:46.983766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.009644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.038996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.052217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.070220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.096892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.116807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.139181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.152782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.172934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.187802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.206860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.255422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.293734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.313316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.352606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:20:47.404990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59520","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T13:30:45.841627Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1080}
	{"level":"info","ts":"2025-09-08T13:30:45.936167Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1080,"took":"94.179596ms","hash":3634714459,"current-db-size-bytes":3276800,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1392640,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-08T13:30:45.936226Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3634714459,"revision":1080,"compact-revision":-1}
	{"level":"info","ts":"2025-09-08T13:35:45.850678Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1336}
	{"level":"info","ts":"2025-09-08T13:35:45.854519Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1336,"took":"3.352425ms","hash":407554430,"current-db-size-bytes":3276800,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1851392,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-08T13:35:45.854563Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":407554430,"revision":1336,"compact-revision":1080}
	
	
	==> kernel <==
	 13:39:35 up 19:22,  0 users,  load average: 1.80, 1.28, 1.40
	Linux no-preload-978911 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [682d750821b448b7795c3718a4b32c348bd1b033b722bd25ecbc55adc59d318c] <==
	I0908 13:37:30.610539       1 main.go:301] handling current node
	I0908 13:37:40.612454       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:37:40.612710       1 main.go:301] handling current node
	I0908 13:37:50.610659       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:37:50.610877       1 main.go:301] handling current node
	I0908 13:38:00.616436       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:38:00.616473       1 main.go:301] handling current node
	I0908 13:38:10.618769       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:38:10.618807       1 main.go:301] handling current node
	I0908 13:38:20.614009       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:38:20.614058       1 main.go:301] handling current node
	I0908 13:38:30.611238       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:38:30.611480       1 main.go:301] handling current node
	I0908 13:38:40.618465       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:38:40.618499       1 main.go:301] handling current node
	I0908 13:38:50.610386       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:38:50.610479       1 main.go:301] handling current node
	I0908 13:39:00.611558       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:39:00.611769       1 main.go:301] handling current node
	I0908 13:39:10.618436       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:39:10.618477       1 main.go:301] handling current node
	I0908 13:39:20.611696       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:39:20.611731       1 main.go:301] handling current node
	I0908 13:39:30.611189       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:39:30.611224       1 main.go:301] handling current node
	
	
	==> kindnet [e685e05b08b51e456163cacd1644bf5fab5dd1c6118ed288241b851f6da29e62] <==
	I0908 13:19:44.099861       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 13:19:44.100229       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0908 13:19:44.100430       1 main.go:148] setting mtu 1500 for CNI 
	I0908 13:19:44.100447       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 13:19:44.100465       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T13:19:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0908 13:19:44.400390       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 13:19:44.400424       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 13:19:44.400434       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 13:19:44.400568       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0908 13:19:44.601353       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 13:19:44.601377       1 metrics.go:72] Registering metrics
	I0908 13:19:44.601466       1 controller.go:711] "Syncing nftables rules"
	I0908 13:19:54.309979       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:19:54.310044       1 main.go:301] handling current node
	I0908 13:20:04.313950       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:20:04.313990       1 main.go:301] handling current node
	I0908 13:20:14.305827       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:20:14.305903       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3d049d3c26e41d0f9ed4a0f3d86102b35674f514ee9cbb2644fcfbd0f075b0f5] <==
	I0908 13:35:49.501854       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:35:52.218049       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 13:36:49.500910       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:36:49.500969       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 13:36:49.501128       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:36:49.502439       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:36:49.502484       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 13:36:49.502496       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:36:51.639495       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:36:53.286033       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:37:56.728173       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:38:19.747555       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 13:38:49.501983       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:38:49.502220       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 13:38:49.502241       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:38:49.503058       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:38:49.503091       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 13:38:49.504252       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:38:58.776152       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [453e5e825289a6e70e8cee4d4d3e9be4fa57968b9f3101e0486c55f00773e336] <==
	I0908 13:19:40.567306       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0908 13:19:40.653495       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 13:19:40.668225       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 13:19:40.717810       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E0908 13:20:22.493966       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:58200: use of closed network connection
	I0908 13:20:23.279549       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 13:20:23.284065       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:20:23.284120       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0908 13:20:23.284163       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0908 13:20:23.463420       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.111.134.213"}
	W0908 13:20:23.484744       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:20:23.485096       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0908 13:20:23.496408       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W0908 13:20:23.512457       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:20:23.512510       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0908 13:20:23.514079       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-controller-manager [e59a4771913f0c586033aa2f970d5003227c9262bc5c73b7ef6007c8ab2801a0] <==
	I0908 13:19:39.626822       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 13:19:39.629457       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-978911" podCIDRs=["10.244.0.0/24"]
	I0908 13:19:39.632235       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 13:19:39.635497       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0908 13:19:39.639683       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:19:39.645980       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 13:19:39.659962       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:19:39.660512       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 13:19:39.662930       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 13:19:39.663120       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 13:19:39.663379       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 13:19:39.664014       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 13:19:39.664051       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 13:19:39.664498       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 13:19:39.664530       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 13:19:39.665035       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 13:19:39.665066       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 13:19:39.665118       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0908 13:19:39.665261       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 13:19:39.665405       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-978911"
	I0908 13:19:39.665492       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 13:19:39.669290       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0908 13:19:39.671950       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 13:19:39.672010       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	E0908 13:20:23.326965       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-746fcd58dc\" failed with pods \"metrics-server-746fcd58dc-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [ea8f24cba6f98f3abd27cc0b37d1412932c064fa95e51d2bbe79bb10ebb23742] <==
	I0908 13:33:22.128598       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:33:51.994836       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:33:52.136034       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:34:21.999200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:34:22.143571       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:34:52.004870       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:34:52.151757       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:35:22.012599       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:35:22.160021       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:35:52.018193       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:35:52.168161       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:36:22.023456       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:36:22.175686       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:36:52.028584       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:36:52.182098       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:37:22.034025       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:37:22.191332       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:37:52.038795       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:37:52.198897       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:38:22.044289       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:38:22.206924       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:38:52.049025       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:38:52.218487       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:39:22.054252       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:39:22.225598       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
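The kube-apiserver and kube-controller-manager logs above share one root cause: the v1beta1.metrics.k8s.io APIService is registered, but its backing metrics-server pod never became available (see the image pull failures in the containerd and kubelet sections), so OpenAPI aggregation returns 503 and the resource-quota and garbage-collector controllers keep reporting the group's discovery as stale. A read-only check, with the context name and the metrics-server label assumed rather than taken from this report:

	# Assumptions: context named after the profile; metrics-server pods carry label k8s-app=metrics-server.
	kubectl --context no-preload-978911 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-978911 -n kube-system get pods -l k8s-app=metrics-server
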
	
	
	==> kube-proxy [03b6d80d695b892c4255a8bd73705ce24a9e32cfb00dc95b821b0b1f0cd2322e] <==
	I0908 13:20:50.175099       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:20:50.265210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:20:50.366277       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:20:50.366329       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0908 13:20:50.366423       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:20:50.431589       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:20:50.431649       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:20:50.462832       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:20:50.463144       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:20:50.463158       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:20:50.470254       1 config.go:200] "Starting service config controller"
	I0908 13:20:50.470279       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:20:50.470298       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:20:50.470302       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:20:50.471541       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:20:50.471559       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:20:50.476343       1 config.go:309] "Starting node config controller"
	I0908 13:20:50.476367       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:20:50.476375       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:20:50.571786       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:20:50.571873       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 13:20:50.572271       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d5252e4ac54a43b7539b2bfe24a8a0183a6b9420e5f2255895a872dd266dfbdd] <==
	I0908 13:19:41.469721       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:19:41.564302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:19:41.677273       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:19:41.677314       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0908 13:19:41.677410       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:19:41.735733       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:19:41.735784       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:19:41.742438       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:19:41.742776       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:19:41.742791       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:19:41.745859       1 config.go:200] "Starting service config controller"
	I0908 13:19:41.749671       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:19:41.749752       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:19:41.749759       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:19:41.749775       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:19:41.749790       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:19:41.750641       1 config.go:309] "Starting node config controller"
	I0908 13:19:41.750649       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:19:41.750656       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:19:41.850981       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:19:41.851024       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:19:41.851066       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5bb6d3486bdd61e30d737319dc82458ac9bf73e9632bce01c4788fc396dfa9e7] <==
	I0908 13:20:47.242443       1 serving.go:386] Generated self-signed cert in-memory
	W0908 13:20:48.457432       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 13:20:48.460456       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 13:20:48.460487       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 13:20:48.460509       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 13:20:48.625230       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 13:20:48.625262       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:20:48.654687       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:20:48.654802       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 13:20:48.654563       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:20:48.663649       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:20:48.767308       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f5e8fe9a2b29ca8f991932c0c60513abc177286d77ac00c6ac9f77de28096302] <==
	E0908 13:19:32.837217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 13:19:32.837418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 13:19:32.837497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 13:19:32.837925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 13:19:32.838028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 13:19:32.838075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 13:19:32.838116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 13:19:32.838133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 13:19:32.854626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 13:19:32.854962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 13:19:32.855189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 13:19:32.855562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 13:19:32.855540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 13:19:32.855742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 13:19:33.662431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 13:19:33.673449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 13:19:33.681196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 13:19:33.741281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 13:19:33.790626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 13:19:33.830089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 13:19:33.848593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 13:19:33.903870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 13:19:33.909363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 13:19:33.943886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0908 13:19:35.817652       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 13:38:15 no-preload-978911 kubelet[660]: E0908 13:38:15.637524     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	Sep 08 13:38:20 no-preload-978911 kubelet[660]: E0908 13:38:20.638329     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-vh962" podUID="959e88f4-10f0-4c5b-98da-0451d012b212"
	Sep 08 13:38:21 no-preload-978911 kubelet[660]: E0908 13:38:21.638472     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t" podUID="de89b47d-3928-4575-a5b7-9e358e5e0c42"
	Sep 08 13:38:27 no-preload-978911 kubelet[660]: I0908 13:38:27.637789     660 scope.go:117] "RemoveContainer" containerID="438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0"
	Sep 08 13:38:27 no-preload-978911 kubelet[660]: E0908 13:38:27.638001     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	Sep 08 13:38:32 no-preload-978911 kubelet[660]: E0908 13:38:32.640206     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-vh962" podUID="959e88f4-10f0-4c5b-98da-0451d012b212"
	Sep 08 13:38:34 no-preload-978911 kubelet[660]: E0908 13:38:34.640479     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t" podUID="de89b47d-3928-4575-a5b7-9e358e5e0c42"
	Sep 08 13:38:39 no-preload-978911 kubelet[660]: I0908 13:38:39.638120     660 scope.go:117] "RemoveContainer" containerID="438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0"
	Sep 08 13:38:39 no-preload-978911 kubelet[660]: E0908 13:38:39.638269     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	Sep 08 13:38:46 no-preload-978911 kubelet[660]: E0908 13:38:46.639870     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-vh962" podUID="959e88f4-10f0-4c5b-98da-0451d012b212"
	Sep 08 13:38:48 no-preload-978911 kubelet[660]: E0908 13:38:48.638724     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t" podUID="de89b47d-3928-4575-a5b7-9e358e5e0c42"
	Sep 08 13:38:54 no-preload-978911 kubelet[660]: I0908 13:38:54.637843     660 scope.go:117] "RemoveContainer" containerID="438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0"
	Sep 08 13:38:54 no-preload-978911 kubelet[660]: E0908 13:38:54.638026     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	Sep 08 13:38:59 no-preload-978911 kubelet[660]: E0908 13:38:59.638726     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-vh962" podUID="959e88f4-10f0-4c5b-98da-0451d012b212"
	Sep 08 13:39:01 no-preload-978911 kubelet[660]: E0908 13:39:01.638748     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t" podUID="de89b47d-3928-4575-a5b7-9e358e5e0c42"
	Sep 08 13:39:06 no-preload-978911 kubelet[660]: I0908 13:39:06.638053     660 scope.go:117] "RemoveContainer" containerID="438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0"
	Sep 08 13:39:06 no-preload-978911 kubelet[660]: E0908 13:39:06.638262     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	Sep 08 13:39:12 no-preload-978911 kubelet[660]: E0908 13:39:12.639336     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t" podUID="de89b47d-3928-4575-a5b7-9e358e5e0c42"
	Sep 08 13:39:14 no-preload-978911 kubelet[660]: E0908 13:39:14.638829     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-vh962" podUID="959e88f4-10f0-4c5b-98da-0451d012b212"
	Sep 08 13:39:20 no-preload-978911 kubelet[660]: I0908 13:39:20.637905     660 scope.go:117] "RemoveContainer" containerID="438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0"
	Sep 08 13:39:20 no-preload-978911 kubelet[660]: E0908 13:39:20.638621     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	Sep 08 13:39:25 no-preload-978911 kubelet[660]: E0908 13:39:25.638940     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-96c5t" podUID="de89b47d-3928-4575-a5b7-9e358e5e0c42"
	Sep 08 13:39:25 no-preload-978911 kubelet[660]: E0908 13:39:25.639453     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-vh962" podUID="959e88f4-10f0-4c5b-98da-0451d012b212"
	Sep 08 13:39:35 no-preload-978911 kubelet[660]: I0908 13:39:35.638225     660 scope.go:117] "RemoveContainer" containerID="438701d402f78eee15259d2d135f0faf5854cab9b9d47b572df1d174a30221b0"
	Sep 08 13:39:35 no-preload-978911 kubelet[660]: E0908 13:39:35.638500     660 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fpptr_kubernetes-dashboard(88b356d0-f549-4960-8ef8-c0268121025b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fpptr" podUID="88b356d0-f549-4960-8ef8-c0268121025b"
	
	
	==> storage-provisioner [93d2704d5a3f99f28ddbee6e156a03effcbeea7e70c0bbe5b78ec6fd44e67b40] <==
	W0908 13:39:10.334846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:12.337553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:12.342398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:14.346248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:14.353269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:16.358573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:16.365007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:18.368131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:18.373537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:20.376953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:20.386048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:22.388935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:22.393593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:24.396812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:24.403999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:26.407254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:26.411721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:28.415831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:28.421698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:30.426173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:30.432324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:32.436156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:32.440984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:34.445586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:39:34.450712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e3598e5f859140c77eac8e1a536c27af9dacfad772d78e969b9b77d58a1b1b4b] <==
	I0908 13:20:50.099375       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 13:21:20.104169       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-978911 -n no-preload-978911
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-978911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-vh962 kubernetes-dashboard-855c9754f9-96c5t
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-978911 describe pod metrics-server-746fcd58dc-vh962 kubernetes-dashboard-855c9754f9-96c5t
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-978911 describe pod metrics-server-746fcd58dc-vh962 kubernetes-dashboard-855c9754f9-96c5t: exit status 1 (84.040547ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-vh962" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-96c5t" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-978911 describe pod metrics-server-746fcd58dc-vh962 kubernetes-dashboard-855c9754f9-96c5t: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (543.57s)
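The kubelet errors above trace to two distinct causes: metrics-server points at the intentionally unresolvable fake.domain registry, while the dashboard pull is rejected by Docker Hub with 429 Too Many Requests (the unauthenticated pull rate limit). A minimal check of the remaining anonymous pull quota from the CI host, assuming curl and jq are available (this was not run as part of the test), would be:

	# request an anonymous token, then read the rate-limit headers on a HEAD request (illustrative only)
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

The ratelimit-limit and ratelimit-remaining response headers show how many anonymous pulls the source IP has left in the current window, which would confirm whether the 429s above are expected for this runner.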

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pfn9p" [39d2d704-928d-4791-9c0d-9ec08f500ce1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-026461 -n embed-certs-026461
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-08 13:48:16.951783624 +0000 UTC m=+4527.287318705
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-026461 describe po kubernetes-dashboard-855c9754f9-pfn9p -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-026461 describe po kubernetes-dashboard-855c9754f9-pfn9p -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-pfn9p
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-026461/192.168.76.2
Start Time:       Mon, 08 Sep 2025 13:38:41 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z4gmf (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-z4gmf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m35s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p to embed-certs-026461
Normal   Pulling    6m41s (x5 over 9m35s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     6m41s (x5 over 9m31s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m41s (x5 over 9m31s)   kubelet            Error: ErrImagePull
Warning  Failed     4m28s (x20 over 9m30s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m13s (x21 over 9m30s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-026461 logs kubernetes-dashboard-855c9754f9-pfn9p -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-026461 logs kubernetes-dashboard-855c9754f9-pfn9p -n kubernetes-dashboard: exit status 1 (111.154206ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-pfn9p" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context embed-certs-026461 logs kubernetes-dashboard-855c9754f9-pfn9p -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
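The failure is the same Docker Hub rate limit recorded in the pod events above, not a scheduling or kubelet problem. One way to reproduce the 429 directly from the node, sketched here under the assumption that the profile is still running (this command was not executed as part of the report), is to retry the pinned digest with crictl over minikube ssh:

	# re-attempt the exact pinned pull from inside the embed-certs node (illustrative sketch)
	out/minikube-linux-arm64 -p embed-certs-026461 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93

If the pull succeeds once the rate-limit window resets, the pod should start on its own, since the kubelet keeps retrying the image behind the ImagePullBackOff back-off.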
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-026461
helpers_test.go:243: (dbg) docker inspect embed-certs-026461:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc",
	        "Created": "2025-09-08T13:37:08.502145223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2973398,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:38:23.710884187Z",
	            "FinishedAt": "2025-09-08T13:38:22.859667311Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc/hosts",
	        "LogPath": "/var/lib/docker/containers/4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc/4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc-json.log",
	        "Name": "/embed-certs-026461",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-026461:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-026461",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc",
	                "LowerDir": "/var/lib/docker/overlay2/fc702e533bca44ff9da292bc30c0d03c44c5a7f8c7dde34c7f5fe9612679d56b-init/diff:/var/lib/docker/overlay2/665ff8bb3d07b8342629df037737f3667c7c59d9d1f85930dc3dfdf138460626/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc702e533bca44ff9da292bc30c0d03c44c5a7f8c7dde34c7f5fe9612679d56b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc702e533bca44ff9da292bc30c0d03c44c5a7f8c7dde34c7f5fe9612679d56b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc702e533bca44ff9da292bc30c0d03c44c5a7f8c7dde34c7f5fe9612679d56b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-026461",
	                "Source": "/var/lib/docker/volumes/embed-certs-026461/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-026461",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-026461",
	                "name.minikube.sigs.k8s.io": "embed-certs-026461",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ddd62593d3e3d337468b52f16616d44ac2518e880ec849c39110d012099c89a7",
	            "SandboxKey": "/var/run/docker/netns/ddd62593d3e3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36743"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36744"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36747"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36745"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36746"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-026461": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:f0:05:8d:98:64",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "004e010c32c3a805df6c59891f6cd9ea228937a7febd6161ebfd8e3248f7c98d",
	                    "EndpointID": "f2c55bea3e5e3c645c141ea58adeaf79f52205363cf7ccadc9f097cd4db315d5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-026461",
	                        "4689353bfe63"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-026461 -n embed-certs-026461
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-026461 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-026461 logs -n 25: (1.958076296s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-978911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                        │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ stop    │ -p no-preload-978911 --alsologtostderr -v=3                                                                                                                                    │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ addons  │ enable dashboard -p no-preload-978911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                   │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ start   │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                  │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:21 UTC │
	│ image   │ old-k8s-version-552634 image list --format=json                                                                                                                                │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:36 UTC │ 08 Sep 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-552634 --alsologtostderr -v=1                                                                                                                               │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:36 UTC │ 08 Sep 25 13:36 UTC │
	│ unpause │ -p old-k8s-version-552634 --alsologtostderr -v=1                                                                                                                               │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:36 UTC │ 08 Sep 25 13:36 UTC │
	│ delete  │ -p old-k8s-version-552634                                                                                                                                                      │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:37 UTC │ 08 Sep 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-552634                                                                                                                                                      │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:37 UTC │ 08 Sep 25 13:37 UTC │
	│ start   │ -p embed-certs-026461 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                   │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:37 UTC │ 08 Sep 25 13:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-026461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                       │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ stop    │ -p embed-certs-026461 --alsologtostderr -v=3                                                                                                                                   │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ addons  │ enable dashboard -p embed-certs-026461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                  │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ start   │ -p embed-certs-026461 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                   │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:39 UTC │
	│ image   │ no-preload-978911 image list --format=json                                                                                                                                     │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ pause   │ -p no-preload-978911 --alsologtostderr -v=1                                                                                                                                    │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ unpause │ -p no-preload-978911 --alsologtostderr -v=1                                                                                                                                    │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ delete  │ -p no-preload-978911                                                                                                                                                           │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ delete  │ -p no-preload-978911                                                                                                                                                           │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ delete  │ -p disable-driver-mounts-406618                                                                                                                                                │ disable-driver-mounts-406618 │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ start   │ -p default-k8s-diff-port-153420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-153420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:41 UTC │ 08 Sep 25 13:41 UTC │
	│ stop    │ -p default-k8s-diff-port-153420 --alsologtostderr -v=3                                                                                                                         │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:41 UTC │ 08 Sep 25 13:41 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-153420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                        │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:41 UTC │ 08 Sep 25 13:41 UTC │
	│ start   │ -p default-k8s-diff-port-153420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:41 UTC │ 08 Sep 25 13:42 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:41:38
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:41:38.183493 2981832 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:41:38.183928 2981832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:41:38.183942 2981832 out.go:374] Setting ErrFile to fd 2...
	I0908 13:41:38.183947 2981832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:41:38.184718 2981832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 13:41:38.185334 2981832 out.go:368] Setting JSON to false
	I0908 13:41:38.186589 2981832 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":69850,"bootTime":1757269048,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 13:41:38.186804 2981832 start.go:140] virtualization:  
	I0908 13:41:38.189846 2981832 out.go:179] * [default-k8s-diff-port-153420] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:41:38.196363 2981832 notify.go:220] Checking for updates...
	I0908 13:41:38.197252 2981832 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:41:38.200580 2981832 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:41:38.203670 2981832 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:41:38.206693 2981832 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 13:41:38.209759 2981832 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:41:38.212733 2981832 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:41:38.216133 2981832 config.go:182] Loaded profile config "default-k8s-diff-port-153420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:41:38.216703 2981832 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:41:38.244102 2981832 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:41:38.244225 2981832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:41:38.300959 2981832 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:41:38.291668591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:41:38.301106 2981832 docker.go:318] overlay module found
	I0908 13:41:38.306196 2981832 out.go:179] * Using the docker driver based on existing profile
	I0908 13:41:38.309086 2981832 start.go:304] selected driver: docker
	I0908 13:41:38.309108 2981832 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-153420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:41:38.309238 2981832 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:41:38.309979 2981832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:41:38.365841 2981832 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:41:38.355845341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
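Note: the `docker system info --format "{{json .}}"` call above dumps the daemon's info document as one JSON object; the fields visible in the log line (ServerVersion, NCPU, MemTotal, CgroupDriver, OSType, Architecture) can be decoded into a small struct. A minimal Go sketch of that pattern, for illustration only (not minikube's actual cli_runner code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo keeps only the fields we care about from `docker system info`.
    type dockerInfo struct {
        ServerVersion   string `json:"ServerVersion"`
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
        CgroupDriver    string `json:"CgroupDriver"`
        OSType          string `json:"OSType"`
        Architecture    string `json:"Architecture"`
        OperatingSystem string `json:"OperatingSystem"`
    }

    func main() {
        // Ask the daemon to print its info document as a single JSON object.
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("docker %s on %s/%s, %d CPUs, %d bytes RAM, cgroup driver %s\n",
            info.ServerVersion, info.OSType, info.Architecture, info.NCPU, info.MemTotal, info.CgroupDriver)
    }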
	I0908 13:41:38.366183 2981832 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:41:38.366209 2981832 cni.go:84] Creating CNI manager for ""
	I0908 13:41:38.366271 2981832 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:41:38.366314 2981832 start.go:348] cluster config:
	{Name:default-k8s-diff-port-153420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:41:38.371332 2981832 out.go:179] * Starting "default-k8s-diff-port-153420" primary control-plane node in "default-k8s-diff-port-153420" cluster
	I0908 13:41:38.374100 2981832 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 13:41:38.376872 2981832 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:41:38.379608 2981832 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:41:38.379687 2981832 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
	I0908 13:41:38.379700 2981832 cache.go:58] Caching tarball of preloaded images
	I0908 13:41:38.379697 2981832 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:41:38.379828 2981832 preload.go:172] Found /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0908 13:41:38.379902 2981832 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0908 13:41:38.380061 2981832 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/config.json ...
	I0908 13:41:38.401117 2981832 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 13:41:38.401143 2981832 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 13:41:38.401157 2981832 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:41:38.401186 2981832 start.go:360] acquireMachinesLock for default-k8s-diff-port-153420: {Name:mk3c73424b25a4402932f8a2bf50cf6e7d938bf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:41:38.401245 2981832 start.go:364] duration metric: took 36.085µs to acquireMachinesLock for "default-k8s-diff-port-153420"
	I0908 13:41:38.401271 2981832 start.go:96] Skipping create...Using existing machine configuration
	I0908 13:41:38.401281 2981832 fix.go:54] fixHost starting: 
	I0908 13:41:38.401625 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:38.419256 2981832 fix.go:112] recreateIfNeeded on default-k8s-diff-port-153420: state=Stopped err=<nil>
	W0908 13:41:38.419293 2981832 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 13:41:38.422593 2981832 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-153420" ...
	I0908 13:41:38.422688 2981832 cli_runner.go:164] Run: docker start default-k8s-diff-port-153420
	I0908 13:41:38.676966 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:38.702603 2981832 kic.go:430] container "default-k8s-diff-port-153420" state is running.
	I0908 13:41:38.703182 2981832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-153420
	I0908 13:41:38.726048 2981832 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/config.json ...
	I0908 13:41:38.726267 2981832 machine.go:93] provisionDockerMachine start ...
	I0908 13:41:38.726326 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:38.751024 2981832 main.go:141] libmachine: Using SSH client type: native
	I0908 13:41:38.751413 2981832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36753 <nil> <nil>}
	I0908 13:41:38.751437 2981832 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:41:38.752484 2981832 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0908 13:41:41.881945 2981832 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-153420
	
	I0908 13:41:41.881969 2981832 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-153420"
	I0908 13:41:41.882033 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:41.900806 2981832 main.go:141] libmachine: Using SSH client type: native
	I0908 13:41:41.901097 2981832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36753 <nil> <nil>}
	I0908 13:41:41.901113 2981832 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-153420 && echo "default-k8s-diff-port-153420" | sudo tee /etc/hostname
	I0908 13:41:42.043802 2981832 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-153420
	
	I0908 13:41:42.043926 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:42.063664 2981832 main.go:141] libmachine: Using SSH client type: native
	I0908 13:41:42.064010 2981832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36753 <nil> <nil>}
	I0908 13:41:42.064034 2981832 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-153420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-153420/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-153420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:41:42.208530 2981832 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 13:41:42.208619 2981832 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-2749258/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-2749258/.minikube}
	I0908 13:41:42.208665 2981832 ubuntu.go:190] setting up certificates
	I0908 13:41:42.208700 2981832 provision.go:84] configureAuth start
	I0908 13:41:42.208801 2981832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-153420
	I0908 13:41:42.241395 2981832 provision.go:143] copyHostCerts
	I0908 13:41:42.241494 2981832 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem, removing ...
	I0908 13:41:42.241528 2981832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem
	I0908 13:41:42.241622 2981832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem (1123 bytes)
	I0908 13:41:42.241751 2981832 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem, removing ...
	I0908 13:41:42.241762 2981832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem
	I0908 13:41:42.243697 2981832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem (1679 bytes)
	I0908 13:41:42.243885 2981832 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem, removing ...
	I0908 13:41:42.243895 2981832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem
	I0908 13:41:42.243955 2981832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem (1082 bytes)
	I0908 13:41:42.244111 2981832 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-153420 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-153420 localhost minikube]
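Note: the provisioner regenerates a server certificate whose SAN list covers 127.0.0.1, 192.168.85.2, the machine name, localhost and minikube. As a rough, self-signed illustration of how such a SAN list ends up in an x509 certificate (hypothetical sketch, not the libmachine provisioning code, which signs with the CA key):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway key; a real provisioner would sign with the CA private key instead.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-153420"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN entries taken from the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:    []string{"default-k8s-diff-port-153420", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }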
	I0908 13:41:42.945907 2981832 provision.go:177] copyRemoteCerts
	I0908 13:41:42.945982 2981832 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:41:42.946022 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:42.967469 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.059401 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 13:41:43.084522 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0908 13:41:43.110008 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 13:41:43.134954 2981832 provision.go:87] duration metric: took 926.213948ms to configureAuth
	I0908 13:41:43.134982 2981832 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:41:43.135265 2981832 config.go:182] Loaded profile config "default-k8s-diff-port-153420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:41:43.135280 2981832 machine.go:96] duration metric: took 4.409005757s to provisionDockerMachine
	I0908 13:41:43.135290 2981832 start.go:293] postStartSetup for "default-k8s-diff-port-153420" (driver="docker")
	I0908 13:41:43.135321 2981832 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:41:43.135393 2981832 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:41:43.135440 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:43.153733 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.247411 2981832 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:41:43.250658 2981832 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:41:43.250692 2981832 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:41:43.250721 2981832 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:41:43.250733 2981832 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:41:43.250744 2981832 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/addons for local assets ...
	I0908 13:41:43.250816 2981832 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/files for local assets ...
	I0908 13:41:43.250920 2981832 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem -> 27511142.pem in /etc/ssl/certs
	I0908 13:41:43.251026 2981832 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 13:41:43.259545 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:41:43.283800 2981832 start.go:296] duration metric: took 148.481655ms for postStartSetup
	I0908 13:41:43.283920 2981832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:41:43.283981 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:43.301163 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.391171 2981832 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:41:43.395681 2981832 fix.go:56] duration metric: took 4.994393846s for fixHost
	I0908 13:41:43.395708 2981832 start.go:83] releasing machines lock for "default-k8s-diff-port-153420", held for 4.99444972s
	I0908 13:41:43.395781 2981832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-153420
	I0908 13:41:43.413236 2981832 ssh_runner.go:195] Run: cat /version.json
	I0908 13:41:43.413295 2981832 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:41:43.413379 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:43.413298 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:43.435775 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.442467 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.526010 2981832 ssh_runner.go:195] Run: systemctl --version
	I0908 13:41:43.662562 2981832 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:41:43.666918 2981832 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 13:41:43.685126 2981832 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 13:41:43.685200 2981832 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:41:43.695329 2981832 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
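Note: the two `find` invocations above first patch any loopback CNI configs and then rename bridge/podman configs out of the way (adding a `.mk_disabled` suffix) so only the CNI minikube installs stays active; in this run none were found. A minimal Go sketch of the rename step (illustrative, not the actual cni.go implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Move any bridge/podman CNI configs aside so they stop taking effect,
        // mirroring the `find ... -exec mv {} {}.mk_disabled` call in the log.
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pattern)
            if err != nil {
                panic(err)
            }
            for _, path := range matches {
                if filepath.Ext(path) == ".mk_disabled" {
                    continue // already disabled
                }
                if err := os.Rename(path, path+".mk_disabled"); err != nil {
                    panic(err)
                }
                fmt.Println("disabled", path)
            }
        }
    }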
	I0908 13:41:43.695405 2981832 start.go:495] detecting cgroup driver to use...
	I0908 13:41:43.695446 2981832 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:41:43.695512 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 13:41:43.710570 2981832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 13:41:43.722733 2981832 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:41:43.722852 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:41:43.736500 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:41:43.748238 2981832 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:41:43.835148 2981832 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:41:43.920806 2981832 docker.go:234] disabling docker service ...
	I0908 13:41:43.920891 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:41:43.934002 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:41:43.946701 2981832 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:41:44.039460 2981832 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:41:44.132792 2981832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:41:44.145057 2981832 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:41:44.162337 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 13:41:44.172997 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 13:41:44.183329 2981832 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 13:41:44.183447 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 13:41:44.195372 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:41:44.205520 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 13:41:44.215693 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:41:44.225574 2981832 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:41:44.234906 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 13:41:44.245066 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 13:41:44.259873 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 13:41:44.271354 2981832 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:41:44.280087 2981832 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:41:44.289321 2981832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:41:44.379774 2981832 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 13:41:44.536929 2981832 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0908 13:41:44.537068 2981832 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
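Note: the run of `sed` edits above rewrites /etc/containerd/config.toml in place: it pins the sandbox (pause) image, forces `SystemdCgroup = false` to match the host's cgroupfs driver, normalizes the runtime to io.containerd.runc.v2, fixes conf_dir, and re-enables unprivileged ports, before restarting containerd and waiting for its socket. A Go sketch of just the cgroup-driver rewrite, assuming the file already contains a SystemdCgroup line (hypothetical helper, not minikube's code):

    package main

    import (
        "os"
        "regexp"
    )

    // forceCgroupfs flips SystemdCgroup to false in a containerd config.toml,
    // preserving the original indentation, like the sed command in the log.
    func forceCgroupfs(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := forceCgroupfs("/etc/containerd/config.toml"); err != nil {
            panic(err)
        }
    }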
	I0908 13:41:44.541466 2981832 start.go:563] Will wait 60s for crictl version
	I0908 13:41:44.541578 2981832 ssh_runner.go:195] Run: which crictl
	I0908 13:41:44.546581 2981832 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:41:44.587566 2981832 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0908 13:41:44.587635 2981832 ssh_runner.go:195] Run: containerd --version
	I0908 13:41:44.617501 2981832 ssh_runner.go:195] Run: containerd --version
	I0908 13:41:44.645009 2981832 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0908 13:41:44.647977 2981832 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-153420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:41:44.664925 2981832 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0908 13:41:44.668554 2981832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
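Note: the one-liner above rewrites /etc/hosts so it contains exactly one `192.168.85.1  host.minikube.internal` mapping: it drops any existing line for that name, appends the fresh entry, writes to a temp file and copies it back with sudo. The same idea in Go (a sketch; the address and hostname are taken from the log):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.85.1\thost.minikube.internal"

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Keep every line that is not already a host.minikube.internal mapping.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        // Writing directly requires root; the logged command stages a temp file and `sudo cp`s it instead.
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }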
	I0908 13:41:44.679714 2981832 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-153420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:41:44.679841 2981832 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:41:44.679903 2981832 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:41:44.717857 2981832 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:41:44.717878 2981832 containerd.go:534] Images already preloaded, skipping extraction
	I0908 13:41:44.717938 2981832 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:41:44.752831 2981832 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:41:44.752857 2981832 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:41:44.752865 2981832 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.0 containerd true true} ...
	I0908 13:41:44.752966 2981832 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-153420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
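Note: the kubelet drop-in above is rendered from the node's settings (hostname override, node IP, binary path for v1.34.0) and is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal text/template sketch of that kind of rendering (the field names are hypothetical, not minikube's types):

    package main

    import (
        "os"
        "text/template"
    )

    const unitTmpl = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        // Values copied from the log; in a real generator they come from the cluster config.
        err := t.Execute(os.Stdout, map[string]string{
            "KubeletPath": "/var/lib/minikube/binaries/v1.34.0/kubelet",
            "NodeName":    "default-k8s-diff-port-153420",
            "NodeIP":      "192.168.85.2",
        })
        if err != nil {
            panic(err)
        }
    }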
	I0908 13:41:44.753037 2981832 ssh_runner.go:195] Run: sudo crictl info
	I0908 13:41:44.791842 2981832 cni.go:84] Creating CNI manager for ""
	I0908 13:41:44.791869 2981832 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:41:44.791879 2981832 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:41:44.791900 2981832 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-153420 NodeName:default-k8s-diff-port-153420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:41:44.792040 2981832 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-153420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 13:41:44.792121 2981832 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:41:44.801483 2981832 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:41:44.801559 2981832 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:41:44.810818 2981832 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0908 13:41:44.829172 2981832 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:41:44.852071 2981832 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I0908 13:41:44.870014 2981832 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:41:44.873389 2981832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:41:44.884321 2981832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:41:44.973680 2981832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:41:44.987906 2981832 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420 for IP: 192.168.85.2
	I0908 13:41:44.987944 2981832 certs.go:194] generating shared ca certs ...
	I0908 13:41:44.987960 2981832 certs.go:226] acquiring lock for ca certs: {Name:mka64c3c41f67c038c6cf0d4d20f2375b7abe78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:44.988134 2981832 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key
	I0908 13:41:44.988187 2981832 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key
	I0908 13:41:44.988199 2981832 certs.go:256] generating profile certs ...
	I0908 13:41:44.988304 2981832 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.key
	I0908 13:41:44.988383 2981832 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/apiserver.key.92270838
	I0908 13:41:44.988425 2981832 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/proxy-client.key
	I0908 13:41:44.988557 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem (1338 bytes)
	W0908 13:41:44.988595 2981832 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114_empty.pem, impossibly tiny 0 bytes
	I0908 13:41:44.988615 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 13:41:44.988650 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem (1082 bytes)
	I0908 13:41:44.988688 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:41:44.988714 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem (1679 bytes)
	I0908 13:41:44.988776 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:41:44.989412 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:41:45.025156 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 13:41:45.059536 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:41:45.096817 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 13:41:45.136081 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0908 13:41:45.171754 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 13:41:45.227729 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:41:45.277742 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 13:41:45.355221 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem --> /usr/share/ca-certificates/2751114.pem (1338 bytes)
	I0908 13:41:45.413854 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /usr/share/ca-certificates/27511142.pem (1708 bytes)
	I0908 13:41:45.446934 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:41:45.474684 2981832 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:41:45.502534 2981832 ssh_runner.go:195] Run: openssl version
	I0908 13:41:45.508817 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27511142.pem && ln -fs /usr/share/ca-certificates/27511142.pem /etc/ssl/certs/27511142.pem"
	I0908 13:41:45.520778 2981832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27511142.pem
	I0908 13:41:45.525122 2981832 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 12:41 /usr/share/ca-certificates/27511142.pem
	I0908 13:41:45.525226 2981832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27511142.pem
	I0908 13:41:45.533361 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27511142.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 13:41:45.543369 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:41:45.555876 2981832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:41:45.560388 2981832 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:41:45.560477 2981832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:41:45.568121 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 13:41:45.579965 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2751114.pem && ln -fs /usr/share/ca-certificates/2751114.pem /etc/ssl/certs/2751114.pem"
	I0908 13:41:45.590008 2981832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2751114.pem
	I0908 13:41:45.593729 2981832 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 12:41 /usr/share/ca-certificates/2751114.pem
	I0908 13:41:45.593823 2981832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2751114.pem
	I0908 13:41:45.602488 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2751114.pem /etc/ssl/certs/51391683.0"
	I0908 13:41:45.611965 2981832 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:41:45.616750 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 13:41:45.624097 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 13:41:45.632225 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 13:41:45.639956 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 13:41:45.647406 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 13:41:45.655098 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
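Note: each `openssl x509 -noout -in ... -checkend 86400` call above succeeds only if the certificate remains valid for at least another 24 hours, presumably so the existing control-plane certs can be reused on restart. The equivalent check in Go (illustrative sketch):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM certificate at path is still valid
    // for at least the given duration (the analogue of `openssl x509 -checkend`).
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil || block.Type != "CERTIFICATE" {
            return false, errors.New("no certificate PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.After(time.Now().Add(d)), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("valid for another 24h:", ok)
    }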
	I0908 13:41:45.664382 2981832 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-153420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:41:45.664530 2981832 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0908 13:41:45.664630 2981832 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:41:45.715127 2981832 cri.go:89] found id: "c92bcab10a87ad0871a640aa80b2152b1fd54da7db385c67cbfd5b5c9f43fe8e"
	I0908 13:41:45.715202 2981832 cri.go:89] found id: "e2230598fc5039a765624af1e3f63ed99d4770d693871426274fab0e4fc0ea30"
	I0908 13:41:45.715221 2981832 cri.go:89] found id: "c83f774f7cb15ec989c3b862182b8ada28a7212590cabe73828ad3cbc652e5c9"
	I0908 13:41:45.715240 2981832 cri.go:89] found id: "1e493f2b781b56da57b2e825d02a2a9f27a28c4bc9bd840c5b5e3a91641a57fd"
	I0908 13:41:45.715266 2981832 cri.go:89] found id: "560003c0df9ab37f0413fad8d4cfa177c37cab99ce50a679a4dfe3d45f9b9806"
	I0908 13:41:45.715288 2981832 cri.go:89] found id: "80d5d4731aabeab6901bda592deb55b198b5210763d8c88d00d21c2622dc391c"
	I0908 13:41:45.715304 2981832 cri.go:89] found id: "b4369cf69b787c103dc54593695e0c7b6fd035f36d82f02245556ec1854ce8d4"
	I0908 13:41:45.715318 2981832 cri.go:89] found id: "45662498456929054ae3b40fdbb16f05b31d0498f1bdb935defcd60bd0a73b31"
	I0908 13:41:45.715334 2981832 cri.go:89] found id: ""
	I0908 13:41:45.715407 2981832 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0908 13:41:45.728111 2981832 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-08T13:41:45Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0908 13:41:45.728309 2981832 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 13:41:45.744855 2981832 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 13:41:45.744925 2981832 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 13:41:45.745008 2981832 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 13:41:45.758017 2981832 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 13:41:45.758974 2981832 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-153420" does not appear in /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:41:45.759633 2981832 kubeconfig.go:62] /home/jenkins/minikube-integration/21508-2749258/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-153420" cluster setting kubeconfig missing "default-k8s-diff-port-153420" context setting]
	I0908 13:41:45.760482 2981832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:45.762167 2981832 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 13:41:45.773698 2981832 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0908 13:41:45.773774 2981832 kubeadm.go:593] duration metric: took 28.831006ms to restartPrimaryControlPlane
	I0908 13:41:45.773801 2981832 kubeadm.go:394] duration metric: took 109.426992ms to StartCluster
	I0908 13:41:45.773842 2981832 settings.go:142] acquiring lock: {Name:mk4a46c455122873706b4d72c01ce6416a89153c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:45.773919 2981832 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:41:45.775548 2981832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:45.775848 2981832 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 13:41:45.776240 2981832 config.go:182] Loaded profile config "default-k8s-diff-port-153420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:41:45.776314 2981832 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 13:41:45.776388 2981832 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-153420"
	I0908 13:41:45.776402 2981832 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-153420"
	W0908 13:41:45.776408 2981832 addons.go:247] addon storage-provisioner should already be in state true
	I0908 13:41:45.776427 2981832 host.go:66] Checking if "default-k8s-diff-port-153420" exists ...
	I0908 13:41:45.776719 2981832 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-153420"
	I0908 13:41:45.776737 2981832 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-153420"
	I0908 13:41:45.777016 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.777428 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.777850 2981832 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-153420"
	I0908 13:41:45.777973 2981832 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-153420"
	W0908 13:41:45.778000 2981832 addons.go:247] addon dashboard should already be in state true
	I0908 13:41:45.778036 2981832 host.go:66] Checking if "default-k8s-diff-port-153420" exists ...
	I0908 13:41:45.778631 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.783534 2981832 out.go:179] * Verifying Kubernetes components...
	I0908 13:41:45.777899 2981832 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-153420"
	I0908 13:41:45.785885 2981832 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-153420"
	W0908 13:41:45.785898 2981832 addons.go:247] addon metrics-server should already be in state true
	I0908 13:41:45.785942 2981832 host.go:66] Checking if "default-k8s-diff-port-153420" exists ...
	I0908 13:41:45.788526 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.803815 2981832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:41:45.817447 2981832 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-153420"
	W0908 13:41:45.817470 2981832 addons.go:247] addon default-storageclass should already be in state true
	I0908 13:41:45.817494 2981832 host.go:66] Checking if "default-k8s-diff-port-153420" exists ...
	I0908 13:41:45.817919 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.882494 2981832 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 13:41:45.885825 2981832 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 13:41:45.888756 2981832 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 13:41:45.888823 2981832 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:41:45.888838 2981832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 13:41:45.888902 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:45.891659 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 13:41:45.891687 2981832 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 13:41:45.891765 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:45.897849 2981832 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 13:41:45.906583 2981832 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 13:41:45.906609 2981832 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 13:41:45.906686 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:45.912845 2981832 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 13:41:45.912867 2981832 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 13:41:45.912951 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:45.944790 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:45.963396 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:45.981736 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:45.987651 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:46.038283 2981832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:41:46.097701 2981832 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-153420" to be "Ready" ...
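Note: node_ready.go then polls the node for up to 6 minutes until its Ready condition turns True. A rough, hedged equivalent using kubectl's jsonpath output (not how minikube itself performs the check):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            // Read just the Ready condition of the node via kubectl jsonpath.
            out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-153420",
                "get", "node", "default-k8s-diff-port-153420",
                "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for node to become Ready")
    }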
	I0908 13:41:46.202930 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 13:41:46.203003 2981832 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 13:41:46.219579 2981832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:41:46.255728 2981832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:41:46.277898 2981832 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 13:41:46.277973 2981832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 13:41:46.293406 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 13:41:46.293483 2981832 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 13:41:46.444761 2981832 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 13:41:46.444839 2981832 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 13:41:46.452013 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 13:41:46.452088 2981832 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 13:41:46.723006 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 13:41:46.723031 2981832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 13:41:46.737594 2981832 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:41:46.737618 2981832 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 13:41:46.844771 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 13:41:46.844810 2981832 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 13:41:46.880658 2981832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:41:46.970510 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 13:41:46.970552 2981832 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 13:41:47.075018 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 13:41:47.075057 2981832 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 13:41:47.138830 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 13:41:47.138860 2981832 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 13:41:47.224905 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:41:47.224945 2981832 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 13:41:47.269920 2981832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:41:51.416689 2981832 node_ready.go:49] node "default-k8s-diff-port-153420" is "Ready"
	I0908 13:41:51.416720 2981832 node_ready.go:38] duration metric: took 5.318929962s for node "default-k8s-diff-port-153420" to be "Ready" ...
	I0908 13:41:51.416734 2981832 api_server.go:52] waiting for apiserver process to appear ...
	I0908 13:41:51.416805 2981832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:41:54.129335 2981832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.909675131s)
	I0908 13:41:54.129395 2981832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.873575587s)
	I0908 13:41:54.216295 2981832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.335593788s)
	I0908 13:41:54.216325 2981832 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-153420"
	I0908 13:41:54.216424 2981832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.946470309s)
	I0908 13:41:54.216652 2981832 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.799818943s)
	I0908 13:41:54.216671 2981832 api_server.go:72] duration metric: took 8.440772445s to wait for apiserver process to appear ...
	I0908 13:41:54.216678 2981832 api_server.go:88] waiting for apiserver healthz status ...
	I0908 13:41:54.216704 2981832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0908 13:41:54.219870 2981832 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-153420 addons enable metrics-server
	
	I0908 13:41:54.222703 2981832 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0908 13:41:54.225583 2981832 addons.go:514] duration metric: took 8.449262837s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0908 13:41:54.226611 2981832 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 13:41:54.226646 2981832 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 13:41:54.717240 2981832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0908 13:41:54.726476 2981832 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0908 13:41:54.727542 2981832 api_server.go:141] control plane version: v1.34.0
	I0908 13:41:54.727563 2981832 api_server.go:131] duration metric: took 510.878431ms to wait for apiserver health ...
	I0908 13:41:54.727573 2981832 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 13:41:54.731655 2981832 system_pods.go:59] 9 kube-system pods found
	I0908 13:41:54.731697 2981832 system_pods.go:61] "coredns-66bc5c9577-q4jkl" [9b1fa2ae-9787-4693-8166-a084bcb3270d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:54.731711 2981832 system_pods.go:61] "etcd-default-k8s-diff-port-153420" [85041d04-c452-4816-a3a3-65374cf50610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:41:54.731718 2981832 system_pods.go:61] "kindnet-pmj4f" [82339286-acd0-449d-b09e-c551b7e26217] Running
	I0908 13:41:54.731727 2981832 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-153420" [d256a46a-4fad-4315-b1aa-852ba091f72d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:41:54.731735 2981832 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-153420" [e5053ecb-6919-45b7-8476-c9a25c10ecd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:41:54.731749 2981832 system_pods.go:61] "kube-proxy-hqvv8" [a208e720-5648-4b6a-ab64-cde29e26ce5f] Running
	I0908 13:41:54.731756 2981832 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-153420" [5599901d-5667-4d46-910b-6c8b97d28411] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:41:54.731763 2981832 system_pods.go:61] "metrics-server-746fcd58dc-5wgrj" [0d04be90-cd69-469c-a678-e605db608f3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:41:54.731770 2981832 system_pods.go:61] "storage-provisioner" [bdcd8ab1-174b-4b90-b1ee-15e8c109514a] Running
	I0908 13:41:54.731776 2981832 system_pods.go:74] duration metric: took 4.197197ms to wait for pod list to return data ...
	I0908 13:41:54.731784 2981832 default_sa.go:34] waiting for default service account to be created ...
	I0908 13:41:54.734255 2981832 default_sa.go:45] found service account: "default"
	I0908 13:41:54.734279 2981832 default_sa.go:55] duration metric: took 2.485556ms for default service account to be created ...
	I0908 13:41:54.734289 2981832 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 13:41:54.737366 2981832 system_pods.go:86] 9 kube-system pods found
	I0908 13:41:54.737411 2981832 system_pods.go:89] "coredns-66bc5c9577-q4jkl" [9b1fa2ae-9787-4693-8166-a084bcb3270d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:54.737421 2981832 system_pods.go:89] "etcd-default-k8s-diff-port-153420" [85041d04-c452-4816-a3a3-65374cf50610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:41:54.737427 2981832 system_pods.go:89] "kindnet-pmj4f" [82339286-acd0-449d-b09e-c551b7e26217] Running
	I0908 13:41:54.737434 2981832 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-153420" [d256a46a-4fad-4315-b1aa-852ba091f72d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:41:54.737446 2981832 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-153420" [e5053ecb-6919-45b7-8476-c9a25c10ecd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:41:54.737453 2981832 system_pods.go:89] "kube-proxy-hqvv8" [a208e720-5648-4b6a-ab64-cde29e26ce5f] Running
	I0908 13:41:54.737460 2981832 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-153420" [5599901d-5667-4d46-910b-6c8b97d28411] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:41:54.737470 2981832 system_pods.go:89] "metrics-server-746fcd58dc-5wgrj" [0d04be90-cd69-469c-a678-e605db608f3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:41:54.737483 2981832 system_pods.go:89] "storage-provisioner" [bdcd8ab1-174b-4b90-b1ee-15e8c109514a] Running
	I0908 13:41:54.737497 2981832 system_pods.go:126] duration metric: took 3.202144ms to wait for k8s-apps to be running ...
	I0908 13:41:54.737504 2981832 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 13:41:54.737565 2981832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:41:54.749866 2981832 system_svc.go:56] duration metric: took 12.349515ms WaitForService to wait for kubelet
	I0908 13:41:54.749891 2981832 kubeadm.go:578] duration metric: took 8.97399128s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:41:54.749910 2981832 node_conditions.go:102] verifying NodePressure condition ...
	I0908 13:41:54.753038 2981832 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 13:41:54.753079 2981832 node_conditions.go:123] node cpu capacity is 2
	I0908 13:41:54.753097 2981832 node_conditions.go:105] duration metric: took 3.179704ms to run NodePressure ...
	I0908 13:41:54.753110 2981832 start.go:241] waiting for startup goroutines ...
	I0908 13:41:54.753120 2981832 start.go:246] waiting for cluster config update ...
	I0908 13:41:54.753134 2981832 start.go:255] writing updated cluster config ...
	I0908 13:41:54.753431 2981832 ssh_runner.go:195] Run: rm -f paused
	I0908 13:41:54.756792 2981832 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:41:54.760701 2981832 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q4jkl" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 13:41:56.771105 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:41:59.266096 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:01.268327 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:03.765282 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:05.766762 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:08.266832 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:10.267442 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:12.765755 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:14.766606 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:17.266472 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:19.766587 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:22.265694 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:24.266392 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:26.766225 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	I0908 13:42:27.265787 2981832 pod_ready.go:94] pod "coredns-66bc5c9577-q4jkl" is "Ready"
	I0908 13:42:27.265819 2981832 pod_ready.go:86] duration metric: took 32.505083285s for pod "coredns-66bc5c9577-q4jkl" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.268482 2981832 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.272952 2981832 pod_ready.go:94] pod "etcd-default-k8s-diff-port-153420" is "Ready"
	I0908 13:42:27.272984 2981832 pod_ready.go:86] duration metric: took 4.457457ms for pod "etcd-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.275125 2981832 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.279164 2981832 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-153420" is "Ready"
	I0908 13:42:27.279191 2981832 pod_ready.go:86] duration metric: took 4.042799ms for pod "kube-apiserver-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.281290 2981832 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.464480 2981832 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-153420" is "Ready"
	I0908 13:42:27.464514 2981832 pod_ready.go:86] duration metric: took 183.197917ms for pod "kube-controller-manager-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.667880 2981832 pod_ready.go:83] waiting for pod "kube-proxy-hqvv8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:28.064166 2981832 pod_ready.go:94] pod "kube-proxy-hqvv8" is "Ready"
	I0908 13:42:28.064195 2981832 pod_ready.go:86] duration metric: took 396.283645ms for pod "kube-proxy-hqvv8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:28.264446 2981832 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:28.664751 2981832 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-153420" is "Ready"
	I0908 13:42:28.664776 2981832 pod_ready.go:86] duration metric: took 400.261596ms for pod "kube-scheduler-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:28.664789 2981832 pod_ready.go:40] duration metric: took 33.907967358s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:42:28.722480 2981832 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 13:42:28.725526 2981832 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-153420" cluster and "default" namespace by default
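For context on the healthz wait at 13:41:54 above: the log polls https://192.168.85.2:8444/healthz until it stops returning 500 and answers 200 "ok". A minimal, self-contained sketch of that kind of probe follows; the address, the ~500ms retry interval, and the insecure TLS setting are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver in this setup serves a self-signed certificate, so this
	// sketch skips verification; a real client would load the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.85.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthy: the same condition the log waits for
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry gap seen in the log
	}
}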
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	19a327be064f3       523cad1a4df73       3 minutes ago       Exited              dashboard-metrics-scraper   6                   8e8711d46b117       dashboard-metrics-scraper-6ffb444bf9-f6wkz
	65b20a4e85ccd       ba04bb24b9575       9 minutes ago       Running             storage-provisioner         3                   cc5e4403350ad       storage-provisioner
	3fc465279954a       138784d87c9c5       9 minutes ago       Running             coredns                     1                   45a38ca6b2cae       coredns-66bc5c9577-4wnmn
	b21089fdf0605       1611cd07b61d5       9 minutes ago       Running             busybox                     1                   5dd5f0d9d4d75       busybox
	dac710a912bb4       ba04bb24b9575       9 minutes ago       Exited              storage-provisioner         2                   cc5e4403350ad       storage-provisioner
	6e197c1878f39       6fc32d66c1411       9 minutes ago       Running             kube-proxy                  1                   1389278804160       kube-proxy-xsnrj
	0864e28d4ed8f       b1a8c6f707935       9 minutes ago       Running             kindnet-cni                 1                   fac4e59fd6907       kindnet-9flm7
	4ff6224ab16c1       996be7e86d9b3       9 minutes ago       Running             kube-controller-manager     1                   d2225cd1e59f7       kube-controller-manager-embed-certs-026461
	20084fdf88867       a1894772a478e       9 minutes ago       Running             etcd                        1                   6ef792a3df840       etcd-embed-certs-026461
	d10972d8b632a       a25f5ef9c34c3       9 minutes ago       Running             kube-scheduler              1                   bdf56271b7b31       kube-scheduler-embed-certs-026461
	fe40db18432cc       d291939e99406       9 minutes ago       Running             kube-apiserver              1                   b5a0fc931bd66       kube-apiserver-embed-certs-026461
	7496a20db308c       1611cd07b61d5       10 minutes ago      Exited              busybox                     0                   6c7ba793ea9e7       busybox
	fe3cede4fe480       138784d87c9c5       10 minutes ago      Exited              coredns                     0                   ac7bf281c1496       coredns-66bc5c9577-4wnmn
	e213037c3d557       b1a8c6f707935       10 minutes ago      Exited              kindnet-cni                 0                   35ef3580c85e1       kindnet-9flm7
	c5c57e9e7465f       6fc32d66c1411       10 minutes ago      Exited              kube-proxy                  0                   ea590236c7dff       kube-proxy-xsnrj
	b33eda9a841a9       a1894772a478e       10 minutes ago      Exited              etcd                        0                   3f00bfcd04b87       etcd-embed-certs-026461
	6330ead2ac8c8       996be7e86d9b3       10 minutes ago      Exited              kube-controller-manager     0                   06edb7887c5b7       kube-controller-manager-embed-certs-026461
	d28195f477cc9       a25f5ef9c34c3       10 minutes ago      Exited              kube-scheduler              0                   a9240ad5e2b11       kube-scheduler-embed-certs-026461
	d2738e9189b00       d291939e99406       10 minutes ago      Exited              kube-apiserver              0                   f094fea068fd3       kube-apiserver-embed-certs-026461
	
	
	==> containerd <==
	Sep 08 13:41:52 embed-certs-026461 containerd[576]: time="2025-09-08T13:41:52.357952030Z" level=info msg="received exit event container_id:\"8af0f56852120eec89a9907e8e7edd205d75e414971d1cccc26f50807d1a7a06\" id:\"8af0f56852120eec89a9907e8e7edd205d75e414971d1cccc26f50807d1a7a06\" pid:2718 exit_status:255 exited_at:{seconds:1757338912 nanos:356100078}"
	Sep 08 13:41:52 embed-certs-026461 containerd[576]: time="2025-09-08T13:41:52.412420729Z" level=info msg="shim disconnected" id=8af0f56852120eec89a9907e8e7edd205d75e414971d1cccc26f50807d1a7a06 namespace=k8s.io
	Sep 08 13:41:52 embed-certs-026461 containerd[576]: time="2025-09-08T13:41:52.412676869Z" level=warning msg="cleaning up after shim disconnected" id=8af0f56852120eec89a9907e8e7edd205d75e414971d1cccc26f50807d1a7a06 namespace=k8s.io
	Sep 08 13:41:52 embed-certs-026461 containerd[576]: time="2025-09-08T13:41:52.412840031Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:41:53 embed-certs-026461 containerd[576]: time="2025-09-08T13:41:53.062390355Z" level=info msg="RemoveContainer for \"1eee5b8167c334a94d7d509a354445965d2f52061c77f65bf6b7a9939d80a341\""
	Sep 08 13:41:53 embed-certs-026461 containerd[576]: time="2025-09-08T13:41:53.075313111Z" level=info msg="RemoveContainer for \"1eee5b8167c334a94d7d509a354445965d2f52061c77f65bf6b7a9939d80a341\" returns successfully"
	Sep 08 13:44:18 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:18.193590382Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 13:44:18 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:18.195866988Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:44:18 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:18.338419092Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:44:18 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:18.647148817Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 13:44:18 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:18.647194157Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 08 13:44:20 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:20.194162551Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 08 13:44:20 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:20.199675264Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 08 13:44:20 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:20.201719401Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 08 13:44:20 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:20.201778271Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 08 13:44:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:38.198393957Z" level=info msg="CreateContainer within sandbox \"8e8711d46b1172f486ede8dd4c92ae528802192a13c63be3f98ddcd257833589\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
	Sep 08 13:44:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:38.217375066Z" level=info msg="CreateContainer within sandbox \"8e8711d46b1172f486ede8dd4c92ae528802192a13c63be3f98ddcd257833589\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed\""
	Sep 08 13:44:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:38.218605994Z" level=info msg="StartContainer for \"19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed\""
	Sep 08 13:44:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:38.296050383Z" level=info msg="StartContainer for \"19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed\" returns successfully"
	Sep 08 13:44:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:38.309533285Z" level=info msg="received exit event container_id:\"19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed\" id:\"19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed\" pid:2808 exit_status:255 exited_at:{seconds:1757339078 nanos:307442134}"
	Sep 08 13:44:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:38.331656581Z" level=info msg="shim disconnected" id=19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed namespace=k8s.io
	Sep 08 13:44:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:38.331718864Z" level=warning msg="cleaning up after shim disconnected" id=19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed namespace=k8s.io
	Sep 08 13:44:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:38.331778161Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:44:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:38.503038128Z" level=info msg="RemoveContainer for \"8af0f56852120eec89a9907e8e7edd205d75e414971d1cccc26f50807d1a7a06\""
	Sep 08 13:44:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:44:38.515049600Z" level=info msg="RemoveContainer for \"8af0f56852120eec89a9907e8e7edd205d75e414971d1cccc26f50807d1a7a06\" returns successfully"
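The PullImage failure at 13:44:18 is Docker Hub's anonymous pull rate limit (429 Too Many Requests) on docker.io/kubernetesui/dashboard. A hedged way to confirm the limit from the node, assuming outbound access to Docker Hub and using the documented anonymous token flow (the auth.docker.io endpoint is an assumption from Docker's documentation, not from this log; the manifest URL matches the one containerd hit above):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo := "kubernetesui/dashboard" // repository from the failed pull above

	// 1. Request an anonymous pull token for the repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest and print the rate-limit headers; a 429 here
	//    reproduces the failure containerd logged.
	req, _ := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/"+repo+"/manifests/v2.7.0", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println("status:", res.Status)
	fmt.Println("ratelimit-limit:", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}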
	
	
	==> coredns [3fc465279954ae02420ccafa28848517fd2c806685cb7505ca34449590f063f3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34998 - 36333 "HINFO IN 1728789598036754601.3188074970806449334. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042540563s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
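The repeated "dial tcp 10.96.0.1:443: i/o timeout" entries mean this CoreDNS instance could not reach the in-cluster Service VIP of the apiserver for a while after the restart. A minimal connectivity probe for that symptom, assuming it is run from inside the pod network (the 3-second timeout is arbitrary):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the kubernetes Service VIP CoreDNS was timing out against.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err) // matches the i/o timeout symptom above
		return
	}
	conn.Close()
	fmt.Println("TCP connect to the apiserver VIP succeeded")
}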
	
	
	==> coredns [fe3cede4fe48003419ad4d922a72cd175b10badc10a1731c781055596c825baa] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59382 - 7217 "HINFO IN 323145707175562631.8575395926584091426. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.029866751s
	
	
	==> describe nodes <==
	Name:               embed-certs-026461
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-026461
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=embed-certs-026461
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_37_36_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:37:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-026461
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 13:48:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 13:46:14 +0000   Mon, 08 Sep 2025 13:37:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 13:46:14 +0000   Mon, 08 Sep 2025 13:37:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 13:46:14 +0000   Mon, 08 Sep 2025 13:37:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 13:46:14 +0000   Mon, 08 Sep 2025 13:37:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-026461
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 68838359c58240dbbcb101d7b33babff
	  System UUID:                273578e8-238b-44d1-a9ea-2502d8b75f4a
	  Boot ID:                    9f5228b8-b58e-4b72-938a-84f5f7e9d841
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-4wnmn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-embed-certs-026461                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-9flm7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-embed-certs-026461             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-embed-certs-026461    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-xsnrj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-embed-certs-026461             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-wxqrm               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-f6wkz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pfn9p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m40s                  kube-proxy       
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node embed-certs-026461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node embed-certs-026461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node embed-certs-026461 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node embed-certs-026461 event: Registered Node embed-certs-026461 in Controller
	  Normal   Starting                 9m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m48s (x8 over 9m48s)  kubelet          Node embed-certs-026461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m48s (x8 over 9m48s)  kubelet          Node embed-certs-026461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m48s (x7 over 9m48s)  kubelet          Node embed-certs-026461 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m37s                  node-controller  Node embed-certs-026461 event: Registered Node embed-certs-026461 in Controller
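For reference, the percentages in the Allocated resources section above are each total divided by the node's allocatable capacity, truncated to a whole percent: 950m / 2000m CPU ≈ 47%, 100m / 2000m = 5%, 420Mi (430080Ki) / 8022300Ki ≈ 5%, and 220Mi (225280Ki) / 8022300Ki ≈ 2%.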
	
	
	==> dmesg <==
	[Sep 8 12:32] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [20084fdf888674171cdf5ae036c2c159a28fc20c487ff0e04de3384b619ab59c] <==
	{"level":"warn","ts":"2025-09-08T13:38:34.047196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.078872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.101593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.115374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.132935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.157539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.169442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.189264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.225719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.268134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.308217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.342150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.366867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.382087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.398654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.417080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.441427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.457038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.475978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.499696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.534871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.549800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.566995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.676488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49684","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T13:39:47.316066Z","caller":"traceutil/trace.go:172","msg":"trace[2043197692] transaction","detail":"{read_only:false; response_revision:745; number_of_response:1; }","duration":"107.396113ms","start":"2025-09-08T13:39:47.208656Z","end":"2025-09-08T13:39:47.316052Z","steps":["trace[2043197692] 'process raft request'  (duration: 107.319365ms)"],"step_count":1}
	
	
	==> etcd [b33eda9a841a9cd6dabfae54d7049877b124dd1b907cbc696c92b7f9454c0782] <==
	{"level":"warn","ts":"2025-09-08T13:37:29.659069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.688013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.738662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.783090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.814931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.878319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.900402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.944301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.979479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.020676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.051872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.084295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.110110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.202804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.208039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.232826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.283239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.329656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.372413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.414731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.474301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.500183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.559769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.581556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.699316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49152","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:48:18 up 19:30,  0 users,  load average: 0.57, 0.64, 1.05
	Linux embed-certs-026461 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [0864e28d4ed8f1c2dfb4f237a05798334cdf825daeaacc02eceb59ce2e52c913] <==
	I0908 13:46:17.809427       1 main.go:301] handling current node
	I0908 13:46:27.816112       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:46:27.816148       1 main.go:301] handling current node
	I0908 13:46:37.807645       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:46:37.807852       1 main.go:301] handling current node
	I0908 13:46:47.814460       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:46:47.814502       1 main.go:301] handling current node
	I0908 13:46:57.814425       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:46:57.814625       1 main.go:301] handling current node
	I0908 13:47:07.808193       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:47:07.808228       1 main.go:301] handling current node
	I0908 13:47:17.810474       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:47:17.810523       1 main.go:301] handling current node
	I0908 13:47:27.816168       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:47:27.816204       1 main.go:301] handling current node
	I0908 13:47:37.807485       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:47:37.807524       1 main.go:301] handling current node
	I0908 13:47:47.814405       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:47:47.814444       1 main.go:301] handling current node
	I0908 13:47:57.814437       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:47:57.814472       1 main.go:301] handling current node
	I0908 13:48:07.810374       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:48:07.810652       1 main.go:301] handling current node
	I0908 13:48:17.810100       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:48:17.810139       1 main.go:301] handling current node
	
	
	==> kindnet [e213037c3d5577450863d222bd09abd0b099c0a413275757e8478e475168e6f3] <==
	I0908 13:37:40.705962       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 13:37:40.706192       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0908 13:37:40.706313       1 main.go:148] setting mtu 1500 for CNI 
	I0908 13:37:40.706330       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 13:37:40.706396       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T13:37:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0908 13:37:40.906214       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 13:37:40.906326       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 13:37:40.909421       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 13:37:40.909731       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0908 13:37:41.198498       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 13:37:41.198530       1 metrics.go:72] Registering metrics
	I0908 13:37:41.198590       1 controller.go:711] "Syncing nftables rules"
	I0908 13:37:50.907463       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:37:50.907519       1 main.go:301] handling current node
	I0908 13:38:00.906332       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:38:00.906604       1 main.go:301] handling current node
	I0908 13:38:10.906278       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:38:10.906409       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2738e9189b007c276ce183af874bcd7acf236cb12e77b9760e42726ff6eefbb] <==
	I0908 13:37:35.113624       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0908 13:37:35.124096       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0908 13:37:38.706087       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0908 13:37:39.043440       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0908 13:37:39.595638       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 13:37:39.607157       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0908 13:38:09.999097       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:56954: use of closed network connection
	I0908 13:38:10.833100       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 13:38:10.841890       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:38:10.841940       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0908 13:38:10.842000       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0908 13:38:10.989569       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.102.92.2"}
	W0908 13:38:11.001469       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:38:11.001519       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0908 13:38:11.029942       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:38:11.029997       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [fe40db18432cc24d4aad79981dcd3eed10152204cfaa86fab6a46703c8226021] <==
	I0908 13:43:45.082200       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 13:44:36.909617       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:44:36.909719       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 13:44:36.909744       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:44:36.910658       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:44:36.910740       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 13:44:36.910756       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:44:46.598157       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:44:52.791546       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:45:54.666811       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:45:56.141605       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 13:46:36.910507       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:46:36.910567       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 13:46:36.910581       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:46:36.912302       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:46:36.912469       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 13:46:36.912487       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:47:01.352516       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:47:22.270673       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [4ff6224ab16c1cc607d9f8e1c58ff5fdaff97f854f6a7985708c3d78905fcd65] <==
	I0908 13:42:11.327079       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:42:41.272822       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:42:41.334626       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:43:11.277327       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:43:11.342225       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:43:41.282174       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:43:41.348816       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:44:11.286307       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:44:11.356288       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:44:41.291442       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:44:41.363626       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:45:11.295432       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:45:11.371499       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:45:41.300363       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:45:41.379606       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:46:11.306805       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:46:11.386760       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:46:41.311392       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:46:41.394686       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:47:11.315455       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:47:11.403352       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:47:41.320207       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:47:41.410863       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:48:11.326218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:48:11.418711       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [6330ead2ac8c8438ea6f4bb5f9acb7aa56cc82f9b368662b5bb91e80534650db] <==
	I0908 13:37:38.588282       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0908 13:37:38.588299       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 13:37:38.588585       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-026461"
	I0908 13:37:38.596701       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 13:37:38.588650       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 13:37:38.588638       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 13:37:38.588876       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0908 13:37:38.596994       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 13:37:38.597261       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:37:38.592017       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 13:37:38.602796       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0908 13:37:38.604526       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:37:38.604709       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 13:37:38.604792       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 13:37:38.614571       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:37:38.629685       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:37:38.636412       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0908 13:37:38.637647       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 13:37:38.637654       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 13:37:38.638763       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 13:37:38.638831       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 13:37:38.638888       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 13:37:38.639077       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 13:37:38.641090       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 13:37:38.653387       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [6e197c1878f391fe7b5956a5e92981886600f9b4482ba9a410de004af54407aa] <==
	I0908 13:38:37.603386       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:38:37.692838       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:38:37.793471       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:38:37.793513       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0908 13:38:37.793596       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:38:37.836160       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:38:37.836237       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:38:37.849032       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:38:37.849475       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:38:37.849500       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:38:37.856047       1 config.go:200] "Starting service config controller"
	I0908 13:38:37.856067       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:38:37.856125       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:38:37.856132       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:38:37.856146       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:38:37.856150       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:38:37.857285       1 config.go:309] "Starting node config controller"
	I0908 13:38:37.857295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:38:37.857313       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:38:37.956452       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:38:37.956495       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:38:37.956539       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c5c57e9e7465f3b6dcc007639128213d7d29165743b8342185639d44196c0e60] <==
	I0908 13:37:40.457152       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:37:40.576184       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:37:40.676331       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:37:40.676371       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0908 13:37:40.676440       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:37:40.696809       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:37:40.696859       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:37:40.704148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:37:40.704433       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:37:40.704456       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:37:40.707863       1 config.go:200] "Starting service config controller"
	I0908 13:37:40.707885       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:37:40.707920       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:37:40.707929       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:37:40.707950       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:37:40.707957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:37:40.708874       1 config.go:309] "Starting node config controller"
	I0908 13:37:40.708907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:37:40.708914       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:37:40.808836       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:37:40.808872       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:37:40.809036       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d10972d8b632ac1be953d78807b13300d9a41a4eeb13698e89a3e43b29d1dd02] <==
	I0908 13:38:33.152768       1 serving.go:386] Generated self-signed cert in-memory
	I0908 13:38:36.365982       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 13:38:36.366012       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:38:36.383212       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 13:38:36.383258       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 13:38:36.383298       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:38:36.383305       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:38:36.383318       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:38:36.383324       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:38:36.386416       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:38:36.386461       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 13:38:36.484228       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:38:36.484291       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:38:36.484235       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kube-scheduler [d28195f477cc9f19233c21d3f187a2ddca9c454ef053874df5af77d0ffc92680] <==
	I0908 13:37:29.633996       1 serving.go:386] Generated self-signed cert in-memory
	I0908 13:37:34.271090       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 13:37:34.271313       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:37:34.276774       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:37:34.276953       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 13:37:34.277011       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 13:37:34.277083       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 13:37:34.286502       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:37:34.286687       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:37:34.286774       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:37:34.286813       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:37:34.377594       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 13:37:34.387473       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:37:34.387680       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 13:46:58 embed-certs-026461 kubelet[676]: E0908 13:46:58.193370     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	Sep 08 13:47:04 embed-certs-026461 kubelet[676]: I0908 13:47:04.192735     676 scope.go:117] "RemoveContainer" containerID="19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed"
	Sep 08 13:47:04 embed-certs-026461 kubelet[676]: E0908 13:47:04.193474     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f6wkz_kubernetes-dashboard(1199b5db-3a0d-4e19-97c2-d475a0eb4f26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f6wkz" podUID="1199b5db-3a0d-4e19-97c2-d475a0eb4f26"
	Sep 08 13:47:08 embed-certs-026461 kubelet[676]: E0908 13:47:08.193939     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	Sep 08 13:47:11 embed-certs-026461 kubelet[676]: E0908 13:47:11.193018     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	Sep 08 13:47:17 embed-certs-026461 kubelet[676]: I0908 13:47:17.192495     676 scope.go:117] "RemoveContainer" containerID="19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed"
	Sep 08 13:47:17 embed-certs-026461 kubelet[676]: E0908 13:47:17.192779     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f6wkz_kubernetes-dashboard(1199b5db-3a0d-4e19-97c2-d475a0eb4f26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f6wkz" podUID="1199b5db-3a0d-4e19-97c2-d475a0eb4f26"
	Sep 08 13:47:21 embed-certs-026461 kubelet[676]: E0908 13:47:21.192973     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	Sep 08 13:47:23 embed-certs-026461 kubelet[676]: E0908 13:47:23.193279     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	Sep 08 13:47:30 embed-certs-026461 kubelet[676]: I0908 13:47:30.192961     676 scope.go:117] "RemoveContainer" containerID="19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed"
	Sep 08 13:47:30 embed-certs-026461 kubelet[676]: E0908 13:47:30.193664     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f6wkz_kubernetes-dashboard(1199b5db-3a0d-4e19-97c2-d475a0eb4f26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f6wkz" podUID="1199b5db-3a0d-4e19-97c2-d475a0eb4f26"
	Sep 08 13:47:33 embed-certs-026461 kubelet[676]: E0908 13:47:33.192661     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	Sep 08 13:47:37 embed-certs-026461 kubelet[676]: E0908 13:47:37.192913     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	Sep 08 13:47:43 embed-certs-026461 kubelet[676]: I0908 13:47:43.191935     676 scope.go:117] "RemoveContainer" containerID="19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed"
	Sep 08 13:47:43 embed-certs-026461 kubelet[676]: E0908 13:47:43.192591     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f6wkz_kubernetes-dashboard(1199b5db-3a0d-4e19-97c2-d475a0eb4f26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f6wkz" podUID="1199b5db-3a0d-4e19-97c2-d475a0eb4f26"
	Sep 08 13:47:47 embed-certs-026461 kubelet[676]: E0908 13:47:47.193410     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	Sep 08 13:47:48 embed-certs-026461 kubelet[676]: E0908 13:47:48.192912     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	Sep 08 13:47:58 embed-certs-026461 kubelet[676]: I0908 13:47:58.192329     676 scope.go:117] "RemoveContainer" containerID="19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed"
	Sep 08 13:47:58 embed-certs-026461 kubelet[676]: E0908 13:47:58.192524     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f6wkz_kubernetes-dashboard(1199b5db-3a0d-4e19-97c2-d475a0eb4f26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f6wkz" podUID="1199b5db-3a0d-4e19-97c2-d475a0eb4f26"
	Sep 08 13:47:59 embed-certs-026461 kubelet[676]: E0908 13:47:59.193552     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	Sep 08 13:48:02 embed-certs-026461 kubelet[676]: E0908 13:48:02.192769     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	Sep 08 13:48:10 embed-certs-026461 kubelet[676]: I0908 13:48:10.192285     676 scope.go:117] "RemoveContainer" containerID="19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed"
	Sep 08 13:48:10 embed-certs-026461 kubelet[676]: E0908 13:48:10.192480     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f6wkz_kubernetes-dashboard(1199b5db-3a0d-4e19-97c2-d475a0eb4f26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f6wkz" podUID="1199b5db-3a0d-4e19-97c2-d475a0eb4f26"
	Sep 08 13:48:10 embed-certs-026461 kubelet[676]: E0908 13:48:10.193695     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	Sep 08 13:48:17 embed-certs-026461 kubelet[676]: E0908 13:48:17.193156     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	
	
	==> storage-provisioner [65b20a4e85ccd3a3ac2b3e9c988a6052142c1cd0cbbc417291d379ca6140316e] <==
	W0908 13:47:55.260036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:47:57.263937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:47:57.271011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:47:59.274252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:47:59.278854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:01.284538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:01.290227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:03.293352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:03.297925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:05.300580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:05.305434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:07.308421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:07.312512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:09.315814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:09.320290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:11.326655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:11.331325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:13.334209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:13.339453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:15.342834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:15.347270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:17.350820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:17.356233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:19.360504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:48:19.365195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dac710a912bb41ce6bc10c6eaf19a9182dad19be39b8c19bc10afb37fb140110] <==
	I0908 13:38:37.751605       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 13:39:07.764007       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-026461 -n embed-certs-026461
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-026461 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-wxqrm kubernetes-dashboard-855c9754f9-pfn9p
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-026461 describe pod metrics-server-746fcd58dc-wxqrm kubernetes-dashboard-855c9754f9-pfn9p
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-026461 describe pod metrics-server-746fcd58dc-wxqrm kubernetes-dashboard-855c9754f9-pfn9p: exit status 1 (98.101506ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-wxqrm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-pfn9p" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-026461 describe pod metrics-server-746fcd58dc-wxqrm kubernetes-dashboard-855c9754f9-pfn9p: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.51s)
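The kubelet log above shows every pull of docker.io/kubernetesui/dashboard:v2.7.0 for kubernetes-dashboard-855c9754f9-pfn9p failing against registry-1.docker.io with 429 Too Many Requests (unauthenticated pull rate limit), so the pod never leaves ImagePullBackOff inside the 9m0s window. A minimal repro-side sketch that sidesteps the rate limit, assuming the host can still fetch the image once and using a placeholder mirror URL (neither step is part of the harness), would be to pre-load the image into the profile or point it at a mirror:

	# pre-load the dashboard image so kubelet never has to contact Docker Hub (sketch only)
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	out/minikube-linux-arm64 -p embed-certs-026461 image load docker.io/kubernetesui/dashboard:v2.7.0

	# or restart the profile against a registry mirror (URL below is a placeholder, not from this run)
	out/minikube-linux-arm64 start -p embed-certs-026461 --registry-mirror=https://mirror.example.com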

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kqtp5" [1d2664d7-c720-46c0-8112-72e5cac1495d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 13:42:37.128114 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:57.182045 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:04.831370 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:47.956786 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:13.323078 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:41.023793 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:46:34.195265 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:37.127575 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:57.269970 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-08 13:51:29.462505082 +0000 UTC m=+4719.798040196
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-153420 describe po kubernetes-dashboard-855c9754f9-kqtp5 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-153420 describe po kubernetes-dashboard-855c9754f9-kqtp5 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-kqtp5
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-153420/192.168.85.2
Start Time:       Mon, 08 Sep 2025 13:41:56 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ts5jn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-ts5jn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason       Age                     From               Message
----     ------       ----                    ----               -------
Normal   Scheduled    9m32s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5 to default-k8s-diff-port-153420
Warning  FailedMount  9m32s                   kubelet            MountVolume.SetUp failed for volume "kube-api-access-ts5jn" : configmap "kube-root-ca.crt" not found
Normal   Pulling      6m39s (x5 over 9m31s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed       6m39s (x5 over 9m27s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed       6m39s (x5 over 9m27s)   kubelet            Error: ErrImagePull
Warning  Failed       4m28s (x20 over 9m27s)  kubelet            Error: ImagePullBackOff
Normal   BackOff      4m13s (x21 over 9m27s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-153420 logs kubernetes-dashboard-855c9754f9-kqtp5 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-153420 logs kubernetes-dashboard-855c9754f9-kqtp5 -n kubernetes-dashboard: exit status 1 (107.861729ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-kqtp5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-153420 logs kubernetes-dashboard-855c9754f9-kqtp5 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
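The wait at start_stop_delete_test.go:272 keeps listing pods matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace until they report Ready or the 9m0s deadline expires; the WARNING above shows the final list call itself being cut off by that same deadline through the client-side rate limiter. An equivalent manual check against the same context, as a sketch run outside the harness, would be:

	# wait for the dashboard pod by label, mirroring the 9m0s test window (sketch only)
	kubectl --context default-k8s-diff-port-153420 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m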
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-153420
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-153420:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502",
	        "Created": "2025-09-08T13:39:48.48691874Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2981961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:41:38.454695777Z",
	            "FinishedAt": "2025-09-08T13:41:37.638855437Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502/hostname",
	        "HostsPath": "/var/lib/docker/containers/c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502/hosts",
	        "LogPath": "/var/lib/docker/containers/c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502/c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502-json.log",
	        "Name": "/default-k8s-diff-port-153420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-153420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-153420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502",
	                "LowerDir": "/var/lib/docker/overlay2/e0ab1c43c73c9799fb57efc367fac133b1a1946f0102d5b1b10ed818739a534a-init/diff:/var/lib/docker/overlay2/665ff8bb3d07b8342629df037737f3667c7c59d9d1f85930dc3dfdf138460626/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e0ab1c43c73c9799fb57efc367fac133b1a1946f0102d5b1b10ed818739a534a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e0ab1c43c73c9799fb57efc367fac133b1a1946f0102d5b1b10ed818739a534a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e0ab1c43c73c9799fb57efc367fac133b1a1946f0102d5b1b10ed818739a534a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-153420",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-153420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-153420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-153420",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-153420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bad7a24edabb398db9feb6196f95c5e2623c88b41a055d98160695e34b0a693a",
	            "SandboxKey": "/var/run/docker/netns/bad7a24edabb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36753"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36754"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36757"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36755"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36756"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-153420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:2b:b2:47:fd:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "61a8f9ac4016b0639d2c8f94aad29d0c91889227ea926c5b1872fd9bf7d32ad5",
	                    "EndpointID": "f59a51c2a799e7170ae960c5c77223af143d9e8634299ffe7196a4e0629d32ea",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-153420",
	                        "c6555f790141"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
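The docker inspect dump above is collected mainly for the port mappings and network attachment; a shorter query along these lines (assuming the same container name as in the dump) recovers just that part:

    # Print only the published host-port bindings from the container's network settings
    docker inspect default-k8s-diff-port-153420 --format '{{json .NetworkSettings.Ports}}'

    # Or just the SSH mapping the harness dials (22/tcp -> 127.0.0.1:36753 in the dump above)
    docker port default-k8s-diff-port-153420 22/tcp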
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-153420 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-153420 logs -n 25: (1.946354964s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-978911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                        │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ stop    │ -p no-preload-978911 --alsologtostderr -v=3                                                                                                                                    │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ addons  │ enable dashboard -p no-preload-978911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                   │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ start   │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                  │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:21 UTC │
	│ image   │ old-k8s-version-552634 image list --format=json                                                                                                                                │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:36 UTC │ 08 Sep 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-552634 --alsologtostderr -v=1                                                                                                                               │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:36 UTC │ 08 Sep 25 13:36 UTC │
	│ unpause │ -p old-k8s-version-552634 --alsologtostderr -v=1                                                                                                                               │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:36 UTC │ 08 Sep 25 13:36 UTC │
	│ delete  │ -p old-k8s-version-552634                                                                                                                                                      │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:37 UTC │ 08 Sep 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-552634                                                                                                                                                      │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:37 UTC │ 08 Sep 25 13:37 UTC │
	│ start   │ -p embed-certs-026461 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                   │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:37 UTC │ 08 Sep 25 13:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-026461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                       │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ stop    │ -p embed-certs-026461 --alsologtostderr -v=3                                                                                                                                   │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ addons  │ enable dashboard -p embed-certs-026461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                  │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ start   │ -p embed-certs-026461 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                   │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:39 UTC │
	│ image   │ no-preload-978911 image list --format=json                                                                                                                                     │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ pause   │ -p no-preload-978911 --alsologtostderr -v=1                                                                                                                                    │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ unpause │ -p no-preload-978911 --alsologtostderr -v=1                                                                                                                                    │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ delete  │ -p no-preload-978911                                                                                                                                                           │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ delete  │ -p no-preload-978911                                                                                                                                                           │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ delete  │ -p disable-driver-mounts-406618                                                                                                                                                │ disable-driver-mounts-406618 │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ start   │ -p default-k8s-diff-port-153420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-153420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:41 UTC │ 08 Sep 25 13:41 UTC │
	│ stop    │ -p default-k8s-diff-port-153420 --alsologtostderr -v=3                                                                                                                         │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:41 UTC │ 08 Sep 25 13:41 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-153420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                        │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:41 UTC │ 08 Sep 25 13:41 UTC │
	│ start   │ -p default-k8s-diff-port-153420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:41 UTC │ 08 Sep 25 13:42 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:41:38
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:41:38.183493 2981832 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:41:38.183928 2981832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:41:38.183942 2981832 out.go:374] Setting ErrFile to fd 2...
	I0908 13:41:38.183947 2981832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:41:38.184718 2981832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 13:41:38.185334 2981832 out.go:368] Setting JSON to false
	I0908 13:41:38.186589 2981832 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":69850,"bootTime":1757269048,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 13:41:38.186804 2981832 start.go:140] virtualization:  
	I0908 13:41:38.189846 2981832 out.go:179] * [default-k8s-diff-port-153420] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:41:38.196363 2981832 notify.go:220] Checking for updates...
	I0908 13:41:38.197252 2981832 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:41:38.200580 2981832 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:41:38.203670 2981832 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:41:38.206693 2981832 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 13:41:38.209759 2981832 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:41:38.212733 2981832 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:41:38.216133 2981832 config.go:182] Loaded profile config "default-k8s-diff-port-153420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:41:38.216703 2981832 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:41:38.244102 2981832 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:41:38.244225 2981832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:41:38.300959 2981832 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:41:38.291668591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:41:38.301106 2981832 docker.go:318] overlay module found
	I0908 13:41:38.306196 2981832 out.go:179] * Using the docker driver based on existing profile
	I0908 13:41:38.309086 2981832 start.go:304] selected driver: docker
	I0908 13:41:38.309108 2981832 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-153420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:41:38.309238 2981832 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:41:38.309979 2981832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:41:38.365841 2981832 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:41:38.355845341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:41:38.366183 2981832 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:41:38.366209 2981832 cni.go:84] Creating CNI manager for ""
	I0908 13:41:38.366271 2981832 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:41:38.366314 2981832 start.go:348] cluster config:
	{Name:default-k8s-diff-port-153420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:41:38.371332 2981832 out.go:179] * Starting "default-k8s-diff-port-153420" primary control-plane node in "default-k8s-diff-port-153420" cluster
	I0908 13:41:38.374100 2981832 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 13:41:38.376872 2981832 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:41:38.379608 2981832 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:41:38.379687 2981832 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
	I0908 13:41:38.379700 2981832 cache.go:58] Caching tarball of preloaded images
	I0908 13:41:38.379697 2981832 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:41:38.379828 2981832 preload.go:172] Found /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0908 13:41:38.379902 2981832 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0908 13:41:38.380061 2981832 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/config.json ...
	I0908 13:41:38.401117 2981832 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 13:41:38.401143 2981832 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 13:41:38.401157 2981832 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:41:38.401186 2981832 start.go:360] acquireMachinesLock for default-k8s-diff-port-153420: {Name:mk3c73424b25a4402932f8a2bf50cf6e7d938bf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:41:38.401245 2981832 start.go:364] duration metric: took 36.085µs to acquireMachinesLock for "default-k8s-diff-port-153420"
	I0908 13:41:38.401271 2981832 start.go:96] Skipping create...Using existing machine configuration
	I0908 13:41:38.401281 2981832 fix.go:54] fixHost starting: 
	I0908 13:41:38.401625 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:38.419256 2981832 fix.go:112] recreateIfNeeded on default-k8s-diff-port-153420: state=Stopped err=<nil>
	W0908 13:41:38.419293 2981832 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 13:41:38.422593 2981832 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-153420" ...
	I0908 13:41:38.422688 2981832 cli_runner.go:164] Run: docker start default-k8s-diff-port-153420
	I0908 13:41:38.676966 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:38.702603 2981832 kic.go:430] container "default-k8s-diff-port-153420" state is running.
	I0908 13:41:38.703182 2981832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-153420
	I0908 13:41:38.726048 2981832 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/config.json ...
	I0908 13:41:38.726267 2981832 machine.go:93] provisionDockerMachine start ...
	I0908 13:41:38.726326 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:38.751024 2981832 main.go:141] libmachine: Using SSH client type: native
	I0908 13:41:38.751413 2981832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36753 <nil> <nil>}
	I0908 13:41:38.751437 2981832 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:41:38.752484 2981832 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0908 13:41:41.881945 2981832 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-153420
	
	I0908 13:41:41.881969 2981832 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-153420"
	I0908 13:41:41.882033 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:41.900806 2981832 main.go:141] libmachine: Using SSH client type: native
	I0908 13:41:41.901097 2981832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36753 <nil> <nil>}
	I0908 13:41:41.901113 2981832 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-153420 && echo "default-k8s-diff-port-153420" | sudo tee /etc/hostname
	I0908 13:41:42.043802 2981832 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-153420
	
	I0908 13:41:42.043926 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:42.063664 2981832 main.go:141] libmachine: Using SSH client type: native
	I0908 13:41:42.064010 2981832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36753 <nil> <nil>}
	I0908 13:41:42.064034 2981832 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-153420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-153420/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-153420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:41:42.208530 2981832 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 13:41:42.208619 2981832 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-2749258/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-2749258/.minikube}
	I0908 13:41:42.208665 2981832 ubuntu.go:190] setting up certificates
	I0908 13:41:42.208700 2981832 provision.go:84] configureAuth start
	I0908 13:41:42.208801 2981832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-153420
	I0908 13:41:42.241395 2981832 provision.go:143] copyHostCerts
	I0908 13:41:42.241494 2981832 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem, removing ...
	I0908 13:41:42.241528 2981832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem
	I0908 13:41:42.241622 2981832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem (1123 bytes)
	I0908 13:41:42.241751 2981832 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem, removing ...
	I0908 13:41:42.241762 2981832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem
	I0908 13:41:42.243697 2981832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem (1679 bytes)
	I0908 13:41:42.243885 2981832 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem, removing ...
	I0908 13:41:42.243895 2981832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem
	I0908 13:41:42.243955 2981832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem (1082 bytes)
	I0908 13:41:42.244111 2981832 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-153420 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-153420 localhost minikube]
	I0908 13:41:42.945907 2981832 provision.go:177] copyRemoteCerts
	I0908 13:41:42.945982 2981832 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:41:42.946022 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:42.967469 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.059401 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 13:41:43.084522 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0908 13:41:43.110008 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 13:41:43.134954 2981832 provision.go:87] duration metric: took 926.213948ms to configureAuth
	I0908 13:41:43.134982 2981832 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:41:43.135265 2981832 config.go:182] Loaded profile config "default-k8s-diff-port-153420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:41:43.135280 2981832 machine.go:96] duration metric: took 4.409005757s to provisionDockerMachine
	I0908 13:41:43.135290 2981832 start.go:293] postStartSetup for "default-k8s-diff-port-153420" (driver="docker")
	I0908 13:41:43.135321 2981832 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:41:43.135393 2981832 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:41:43.135440 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:43.153733 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.247411 2981832 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:41:43.250658 2981832 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:41:43.250692 2981832 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:41:43.250721 2981832 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:41:43.250733 2981832 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:41:43.250744 2981832 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/addons for local assets ...
	I0908 13:41:43.250816 2981832 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/files for local assets ...
	I0908 13:41:43.250920 2981832 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem -> 27511142.pem in /etc/ssl/certs
	I0908 13:41:43.251026 2981832 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 13:41:43.259545 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:41:43.283800 2981832 start.go:296] duration metric: took 148.481655ms for postStartSetup
	I0908 13:41:43.283920 2981832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:41:43.283981 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:43.301163 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.391171 2981832 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:41:43.395681 2981832 fix.go:56] duration metric: took 4.994393846s for fixHost
	I0908 13:41:43.395708 2981832 start.go:83] releasing machines lock for "default-k8s-diff-port-153420", held for 4.99444972s
	I0908 13:41:43.395781 2981832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-153420
	I0908 13:41:43.413236 2981832 ssh_runner.go:195] Run: cat /version.json
	I0908 13:41:43.413295 2981832 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:41:43.413379 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:43.413298 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:43.435775 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.442467 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.526010 2981832 ssh_runner.go:195] Run: systemctl --version
	I0908 13:41:43.662562 2981832 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:41:43.666918 2981832 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 13:41:43.685126 2981832 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 13:41:43.685200 2981832 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:41:43.695329 2981832 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 13:41:43.695405 2981832 start.go:495] detecting cgroup driver to use...
	I0908 13:41:43.695446 2981832 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:41:43.695512 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 13:41:43.710570 2981832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 13:41:43.722733 2981832 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:41:43.722852 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:41:43.736500 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:41:43.748238 2981832 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:41:43.835148 2981832 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:41:43.920806 2981832 docker.go:234] disabling docker service ...
	I0908 13:41:43.920891 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:41:43.934002 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:41:43.946701 2981832 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:41:44.039460 2981832 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:41:44.132792 2981832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:41:44.145057 2981832 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:41:44.162337 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 13:41:44.172997 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 13:41:44.183329 2981832 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 13:41:44.183447 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 13:41:44.195372 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:41:44.205520 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 13:41:44.215693 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:41:44.225574 2981832 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:41:44.234906 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 13:41:44.245066 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 13:41:44.259873 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 13:41:44.271354 2981832 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:41:44.280087 2981832 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:41:44.289321 2981832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:41:44.379774 2981832 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 13:41:44.536929 2981832 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0908 13:41:44.537068 2981832 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0908 13:41:44.541466 2981832 start.go:563] Will wait 60s for crictl version
	I0908 13:41:44.541578 2981832 ssh_runner.go:195] Run: which crictl
	I0908 13:41:44.546581 2981832 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:41:44.587566 2981832 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0908 13:41:44.587635 2981832 ssh_runner.go:195] Run: containerd --version
	I0908 13:41:44.617501 2981832 ssh_runner.go:195] Run: containerd --version
	I0908 13:41:44.645009 2981832 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0908 13:41:44.647977 2981832 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-153420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:41:44.664925 2981832 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0908 13:41:44.668554 2981832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:41:44.679714 2981832 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-153420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:41:44.679841 2981832 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:41:44.679903 2981832 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:41:44.717857 2981832 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:41:44.717878 2981832 containerd.go:534] Images already preloaded, skipping extraction
	I0908 13:41:44.717938 2981832 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:41:44.752831 2981832 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:41:44.752857 2981832 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:41:44.752865 2981832 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.0 containerd true true} ...
	I0908 13:41:44.752966 2981832 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-153420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 13:41:44.753037 2981832 ssh_runner.go:195] Run: sudo crictl info
	I0908 13:41:44.791842 2981832 cni.go:84] Creating CNI manager for ""
	I0908 13:41:44.791869 2981832 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:41:44.791879 2981832 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:41:44.791900 2981832 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-153420 NodeName:default-k8s-diff-port-153420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:41:44.792040 2981832 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-153420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 13:41:44.792121 2981832 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:41:44.801483 2981832 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:41:44.801559 2981832 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:41:44.810818 2981832 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0908 13:41:44.829172 2981832 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:41:44.852071 2981832 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I0908 13:41:44.870014 2981832 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:41:44.873389 2981832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:41:44.884321 2981832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:41:44.973680 2981832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:41:44.987906 2981832 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420 for IP: 192.168.85.2
	I0908 13:41:44.987944 2981832 certs.go:194] generating shared ca certs ...
	I0908 13:41:44.987960 2981832 certs.go:226] acquiring lock for ca certs: {Name:mka64c3c41f67c038c6cf0d4d20f2375b7abe78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:44.988134 2981832 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key
	I0908 13:41:44.988187 2981832 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key
	I0908 13:41:44.988199 2981832 certs.go:256] generating profile certs ...
	I0908 13:41:44.988304 2981832 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.key
	I0908 13:41:44.988383 2981832 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/apiserver.key.92270838
	I0908 13:41:44.988425 2981832 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/proxy-client.key
	I0908 13:41:44.988557 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem (1338 bytes)
	W0908 13:41:44.988595 2981832 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114_empty.pem, impossibly tiny 0 bytes
	I0908 13:41:44.988615 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 13:41:44.988650 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem (1082 bytes)
	I0908 13:41:44.988688 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:41:44.988714 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem (1679 bytes)
	I0908 13:41:44.988776 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:41:44.989412 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:41:45.025156 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 13:41:45.059536 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:41:45.096817 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 13:41:45.136081 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0908 13:41:45.171754 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 13:41:45.227729 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:41:45.277742 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 13:41:45.355221 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem --> /usr/share/ca-certificates/2751114.pem (1338 bytes)
	I0908 13:41:45.413854 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /usr/share/ca-certificates/27511142.pem (1708 bytes)
	I0908 13:41:45.446934 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:41:45.474684 2981832 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:41:45.502534 2981832 ssh_runner.go:195] Run: openssl version
	I0908 13:41:45.508817 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27511142.pem && ln -fs /usr/share/ca-certificates/27511142.pem /etc/ssl/certs/27511142.pem"
	I0908 13:41:45.520778 2981832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27511142.pem
	I0908 13:41:45.525122 2981832 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 12:41 /usr/share/ca-certificates/27511142.pem
	I0908 13:41:45.525226 2981832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27511142.pem
	I0908 13:41:45.533361 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27511142.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 13:41:45.543369 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:41:45.555876 2981832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:41:45.560388 2981832 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:41:45.560477 2981832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:41:45.568121 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 13:41:45.579965 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2751114.pem && ln -fs /usr/share/ca-certificates/2751114.pem /etc/ssl/certs/2751114.pem"
	I0908 13:41:45.590008 2981832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2751114.pem
	I0908 13:41:45.593729 2981832 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 12:41 /usr/share/ca-certificates/2751114.pem
	I0908 13:41:45.593823 2981832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2751114.pem
	I0908 13:41:45.602488 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2751114.pem /etc/ssl/certs/51391683.0"
	I0908 13:41:45.611965 2981832 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:41:45.616750 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 13:41:45.624097 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 13:41:45.632225 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 13:41:45.639956 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 13:41:45.647406 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 13:41:45.655098 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0908 13:41:45.664382 2981832 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-153420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:41:45.664530 2981832 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0908 13:41:45.664630 2981832 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:41:45.715127 2981832 cri.go:89] found id: "c92bcab10a87ad0871a640aa80b2152b1fd54da7db385c67cbfd5b5c9f43fe8e"
	I0908 13:41:45.715202 2981832 cri.go:89] found id: "e2230598fc5039a765624af1e3f63ed99d4770d693871426274fab0e4fc0ea30"
	I0908 13:41:45.715221 2981832 cri.go:89] found id: "c83f774f7cb15ec989c3b862182b8ada28a7212590cabe73828ad3cbc652e5c9"
	I0908 13:41:45.715240 2981832 cri.go:89] found id: "1e493f2b781b56da57b2e825d02a2a9f27a28c4bc9bd840c5b5e3a91641a57fd"
	I0908 13:41:45.715266 2981832 cri.go:89] found id: "560003c0df9ab37f0413fad8d4cfa177c37cab99ce50a679a4dfe3d45f9b9806"
	I0908 13:41:45.715288 2981832 cri.go:89] found id: "80d5d4731aabeab6901bda592deb55b198b5210763d8c88d00d21c2622dc391c"
	I0908 13:41:45.715304 2981832 cri.go:89] found id: "b4369cf69b787c103dc54593695e0c7b6fd035f36d82f02245556ec1854ce8d4"
	I0908 13:41:45.715318 2981832 cri.go:89] found id: "45662498456929054ae3b40fdbb16f05b31d0498f1bdb935defcd60bd0a73b31"
	I0908 13:41:45.715334 2981832 cri.go:89] found id: ""
	I0908 13:41:45.715407 2981832 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0908 13:41:45.728111 2981832 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-08T13:41:45Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0908 13:41:45.728309 2981832 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 13:41:45.744855 2981832 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 13:41:45.744925 2981832 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 13:41:45.745008 2981832 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 13:41:45.758017 2981832 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 13:41:45.758974 2981832 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-153420" does not appear in /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:41:45.759633 2981832 kubeconfig.go:62] /home/jenkins/minikube-integration/21508-2749258/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-153420" cluster setting kubeconfig missing "default-k8s-diff-port-153420" context setting]
	I0908 13:41:45.760482 2981832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:45.762167 2981832 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 13:41:45.773698 2981832 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0908 13:41:45.773774 2981832 kubeadm.go:593] duration metric: took 28.831006ms to restartPrimaryControlPlane
	I0908 13:41:45.773801 2981832 kubeadm.go:394] duration metric: took 109.426992ms to StartCluster
	I0908 13:41:45.773842 2981832 settings.go:142] acquiring lock: {Name:mk4a46c455122873706b4d72c01ce6416a89153c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:45.773919 2981832 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:41:45.775548 2981832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:45.775848 2981832 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 13:41:45.776240 2981832 config.go:182] Loaded profile config "default-k8s-diff-port-153420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:41:45.776314 2981832 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 13:41:45.776388 2981832 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-153420"
	I0908 13:41:45.776402 2981832 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-153420"
	W0908 13:41:45.776408 2981832 addons.go:247] addon storage-provisioner should already be in state true
	I0908 13:41:45.776427 2981832 host.go:66] Checking if "default-k8s-diff-port-153420" exists ...
	I0908 13:41:45.776719 2981832 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-153420"
	I0908 13:41:45.776737 2981832 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-153420"
	I0908 13:41:45.777016 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.777428 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.777850 2981832 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-153420"
	I0908 13:41:45.777973 2981832 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-153420"
	W0908 13:41:45.778000 2981832 addons.go:247] addon dashboard should already be in state true
	I0908 13:41:45.778036 2981832 host.go:66] Checking if "default-k8s-diff-port-153420" exists ...
	I0908 13:41:45.778631 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.783534 2981832 out.go:179] * Verifying Kubernetes components...
	I0908 13:41:45.777899 2981832 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-153420"
	I0908 13:41:45.785885 2981832 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-153420"
	W0908 13:41:45.785898 2981832 addons.go:247] addon metrics-server should already be in state true
	I0908 13:41:45.785942 2981832 host.go:66] Checking if "default-k8s-diff-port-153420" exists ...
	I0908 13:41:45.788526 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.803815 2981832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:41:45.817447 2981832 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-153420"
	W0908 13:41:45.817470 2981832 addons.go:247] addon default-storageclass should already be in state true
	I0908 13:41:45.817494 2981832 host.go:66] Checking if "default-k8s-diff-port-153420" exists ...
	I0908 13:41:45.817919 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.882494 2981832 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 13:41:45.885825 2981832 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 13:41:45.888756 2981832 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 13:41:45.888823 2981832 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:41:45.888838 2981832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 13:41:45.888902 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:45.891659 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 13:41:45.891687 2981832 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 13:41:45.891765 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:45.897849 2981832 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 13:41:45.906583 2981832 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 13:41:45.906609 2981832 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 13:41:45.906686 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:45.912845 2981832 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 13:41:45.912867 2981832 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 13:41:45.912951 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:45.944790 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:45.963396 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:45.981736 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:45.987651 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:46.038283 2981832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:41:46.097701 2981832 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-153420" to be "Ready" ...
	I0908 13:41:46.202930 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 13:41:46.203003 2981832 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 13:41:46.219579 2981832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:41:46.255728 2981832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:41:46.277898 2981832 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 13:41:46.277973 2981832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 13:41:46.293406 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 13:41:46.293483 2981832 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 13:41:46.444761 2981832 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 13:41:46.444839 2981832 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 13:41:46.452013 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 13:41:46.452088 2981832 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 13:41:46.723006 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 13:41:46.723031 2981832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 13:41:46.737594 2981832 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:41:46.737618 2981832 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 13:41:46.844771 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 13:41:46.844810 2981832 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 13:41:46.880658 2981832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:41:46.970510 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 13:41:46.970552 2981832 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 13:41:47.075018 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 13:41:47.075057 2981832 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 13:41:47.138830 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 13:41:47.138860 2981832 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 13:41:47.224905 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:41:47.224945 2981832 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 13:41:47.269920 2981832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:41:51.416689 2981832 node_ready.go:49] node "default-k8s-diff-port-153420" is "Ready"
	I0908 13:41:51.416720 2981832 node_ready.go:38] duration metric: took 5.318929962s for node "default-k8s-diff-port-153420" to be "Ready" ...
	I0908 13:41:51.416734 2981832 api_server.go:52] waiting for apiserver process to appear ...
	I0908 13:41:51.416805 2981832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:41:54.129335 2981832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.909675131s)
	I0908 13:41:54.129395 2981832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.873575587s)
	I0908 13:41:54.216295 2981832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.335593788s)
	I0908 13:41:54.216325 2981832 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-153420"
	I0908 13:41:54.216424 2981832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.946470309s)
	I0908 13:41:54.216652 2981832 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.799818943s)
	I0908 13:41:54.216671 2981832 api_server.go:72] duration metric: took 8.440772445s to wait for apiserver process to appear ...
	I0908 13:41:54.216678 2981832 api_server.go:88] waiting for apiserver healthz status ...
	I0908 13:41:54.216704 2981832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0908 13:41:54.219870 2981832 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-153420 addons enable metrics-server
	
	I0908 13:41:54.222703 2981832 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0908 13:41:54.225583 2981832 addons.go:514] duration metric: took 8.449262837s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0908 13:41:54.226611 2981832 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 13:41:54.226646 2981832 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 13:41:54.717240 2981832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0908 13:41:54.726476 2981832 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0908 13:41:54.727542 2981832 api_server.go:141] control plane version: v1.34.0
	I0908 13:41:54.727563 2981832 api_server.go:131] duration metric: took 510.878431ms to wait for apiserver health ...
	I0908 13:41:54.727573 2981832 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 13:41:54.731655 2981832 system_pods.go:59] 9 kube-system pods found
	I0908 13:41:54.731697 2981832 system_pods.go:61] "coredns-66bc5c9577-q4jkl" [9b1fa2ae-9787-4693-8166-a084bcb3270d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:54.731711 2981832 system_pods.go:61] "etcd-default-k8s-diff-port-153420" [85041d04-c452-4816-a3a3-65374cf50610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:41:54.731718 2981832 system_pods.go:61] "kindnet-pmj4f" [82339286-acd0-449d-b09e-c551b7e26217] Running
	I0908 13:41:54.731727 2981832 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-153420" [d256a46a-4fad-4315-b1aa-852ba091f72d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:41:54.731735 2981832 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-153420" [e5053ecb-6919-45b7-8476-c9a25c10ecd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:41:54.731749 2981832 system_pods.go:61] "kube-proxy-hqvv8" [a208e720-5648-4b6a-ab64-cde29e26ce5f] Running
	I0908 13:41:54.731756 2981832 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-153420" [5599901d-5667-4d46-910b-6c8b97d28411] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:41:54.731763 2981832 system_pods.go:61] "metrics-server-746fcd58dc-5wgrj" [0d04be90-cd69-469c-a678-e605db608f3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:41:54.731770 2981832 system_pods.go:61] "storage-provisioner" [bdcd8ab1-174b-4b90-b1ee-15e8c109514a] Running
	I0908 13:41:54.731776 2981832 system_pods.go:74] duration metric: took 4.197197ms to wait for pod list to return data ...
	I0908 13:41:54.731784 2981832 default_sa.go:34] waiting for default service account to be created ...
	I0908 13:41:54.734255 2981832 default_sa.go:45] found service account: "default"
	I0908 13:41:54.734279 2981832 default_sa.go:55] duration metric: took 2.485556ms for default service account to be created ...
	I0908 13:41:54.734289 2981832 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 13:41:54.737366 2981832 system_pods.go:86] 9 kube-system pods found
	I0908 13:41:54.737411 2981832 system_pods.go:89] "coredns-66bc5c9577-q4jkl" [9b1fa2ae-9787-4693-8166-a084bcb3270d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:54.737421 2981832 system_pods.go:89] "etcd-default-k8s-diff-port-153420" [85041d04-c452-4816-a3a3-65374cf50610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:41:54.737427 2981832 system_pods.go:89] "kindnet-pmj4f" [82339286-acd0-449d-b09e-c551b7e26217] Running
	I0908 13:41:54.737434 2981832 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-153420" [d256a46a-4fad-4315-b1aa-852ba091f72d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:41:54.737446 2981832 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-153420" [e5053ecb-6919-45b7-8476-c9a25c10ecd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:41:54.737453 2981832 system_pods.go:89] "kube-proxy-hqvv8" [a208e720-5648-4b6a-ab64-cde29e26ce5f] Running
	I0908 13:41:54.737460 2981832 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-153420" [5599901d-5667-4d46-910b-6c8b97d28411] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:41:54.737470 2981832 system_pods.go:89] "metrics-server-746fcd58dc-5wgrj" [0d04be90-cd69-469c-a678-e605db608f3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:41:54.737483 2981832 system_pods.go:89] "storage-provisioner" [bdcd8ab1-174b-4b90-b1ee-15e8c109514a] Running
	I0908 13:41:54.737497 2981832 system_pods.go:126] duration metric: took 3.202144ms to wait for k8s-apps to be running ...
	I0908 13:41:54.737504 2981832 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 13:41:54.737565 2981832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:41:54.749866 2981832 system_svc.go:56] duration metric: took 12.349515ms WaitForService to wait for kubelet
	I0908 13:41:54.749891 2981832 kubeadm.go:578] duration metric: took 8.97399128s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:41:54.749910 2981832 node_conditions.go:102] verifying NodePressure condition ...
	I0908 13:41:54.753038 2981832 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 13:41:54.753079 2981832 node_conditions.go:123] node cpu capacity is 2
	I0908 13:41:54.753097 2981832 node_conditions.go:105] duration metric: took 3.179704ms to run NodePressure ...
	I0908 13:41:54.753110 2981832 start.go:241] waiting for startup goroutines ...
	I0908 13:41:54.753120 2981832 start.go:246] waiting for cluster config update ...
	I0908 13:41:54.753134 2981832 start.go:255] writing updated cluster config ...
	I0908 13:41:54.753431 2981832 ssh_runner.go:195] Run: rm -f paused
	I0908 13:41:54.756792 2981832 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:41:54.760701 2981832 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q4jkl" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 13:41:56.771105 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:41:59.266096 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:01.268327 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:03.765282 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:05.766762 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:08.266832 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:10.267442 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:12.765755 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:14.766606 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:17.266472 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:19.766587 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:22.265694 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:24.266392 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:26.766225 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	I0908 13:42:27.265787 2981832 pod_ready.go:94] pod "coredns-66bc5c9577-q4jkl" is "Ready"
	I0908 13:42:27.265819 2981832 pod_ready.go:86] duration metric: took 32.505083285s for pod "coredns-66bc5c9577-q4jkl" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.268482 2981832 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.272952 2981832 pod_ready.go:94] pod "etcd-default-k8s-diff-port-153420" is "Ready"
	I0908 13:42:27.272984 2981832 pod_ready.go:86] duration metric: took 4.457457ms for pod "etcd-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.275125 2981832 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.279164 2981832 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-153420" is "Ready"
	I0908 13:42:27.279191 2981832 pod_ready.go:86] duration metric: took 4.042799ms for pod "kube-apiserver-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.281290 2981832 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.464480 2981832 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-153420" is "Ready"
	I0908 13:42:27.464514 2981832 pod_ready.go:86] duration metric: took 183.197917ms for pod "kube-controller-manager-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.667880 2981832 pod_ready.go:83] waiting for pod "kube-proxy-hqvv8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:28.064166 2981832 pod_ready.go:94] pod "kube-proxy-hqvv8" is "Ready"
	I0908 13:42:28.064195 2981832 pod_ready.go:86] duration metric: took 396.283645ms for pod "kube-proxy-hqvv8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:28.264446 2981832 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:28.664751 2981832 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-153420" is "Ready"
	I0908 13:42:28.664776 2981832 pod_ready.go:86] duration metric: took 400.261596ms for pod "kube-scheduler-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:28.664789 2981832 pod_ready.go:40] duration metric: took 33.907967358s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:42:28.722480 2981832 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 13:42:28.725526 2981832 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-153420" cluster and "default" namespace by default
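
The healthz sequence above (a 500 while poststarthook/apiservice-discovery-controller is still settling, then a 200 about half a second later) can be checked by hand against the same endpoint. A minimal sketch, assuming the profile from this log is still running, kubectl has its context, and curl is present in the node image:

    # Verbose readiness report through the apiserver (context name taken from the log above)
    kubectl --context default-k8s-diff-port-153420 get --raw='/readyz?verbose'

    # Or probe /healthz directly from inside the node; 8444 is the non-default apiserver port this profile uses
    minikube -p default-k8s-diff-port-153420 ssh -- curl -sk 'https://localhost:8444/healthz?verbose'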
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c5409ca9f6838       523cad1a4df73       3 minutes ago       Exited              dashboard-metrics-scraper   6                   49598c4430788       dashboard-metrics-scraper-6ffb444bf9-vqcnl
	afd51157a2796       ba04bb24b9575       8 minutes ago       Running             storage-provisioner         2                   5e3df7e102b84       storage-provisioner
	5932df6e2b405       138784d87c9c5       9 minutes ago       Running             coredns                     1                   51cf25b2438a7       coredns-66bc5c9577-q4jkl
	656877048dd4b       1611cd07b61d5       9 minutes ago       Running             busybox                     1                   d84100c9117ec       busybox
	27e765ee1007c       ba04bb24b9575       9 minutes ago       Exited              storage-provisioner         1                   5e3df7e102b84       storage-provisioner
	4195b54b5b0a0       6fc32d66c1411       9 minutes ago       Running             kube-proxy                  1                   dab1f4a92dfc6       kube-proxy-hqvv8
	70096c3780988       b1a8c6f707935       9 minutes ago       Running             kindnet-cni                 1                   ce0e6d115f32d       kindnet-pmj4f
	c47ed1a29ab5d       a25f5ef9c34c3       9 minutes ago       Running             kube-scheduler              1                   73062379e656d       kube-scheduler-default-k8s-diff-port-153420
	492498c3db6a0       a1894772a478e       9 minutes ago       Running             etcd                        1                   2abaf138462ab       etcd-default-k8s-diff-port-153420
	8e83b460fd12c       996be7e86d9b3       9 minutes ago       Running             kube-controller-manager     1                   8c4dccbbe72da       kube-controller-manager-default-k8s-diff-port-153420
	c34a3fdb0a185       d291939e99406       9 minutes ago       Running             kube-apiserver              1                   e63385c5c0e8a       kube-apiserver-default-k8s-diff-port-153420
	a22e36e812d1d       1611cd07b61d5       10 minutes ago      Exited              busybox                     0                   16bd4360afd4a       busybox
	c92bcab10a87a       138784d87c9c5       10 minutes ago      Exited              coredns                     0                   2853b9c0457ae       coredns-66bc5c9577-q4jkl
	c83f774f7cb15       6fc32d66c1411       11 minutes ago      Exited              kube-proxy                  0                   529eff64180e1       kube-proxy-hqvv8
	1e493f2b781b5       b1a8c6f707935       11 minutes ago      Exited              kindnet-cni                 0                   862b9957c2425       kindnet-pmj4f
	560003c0df9ab       a25f5ef9c34c3       11 minutes ago      Exited              kube-scheduler              0                   531cfaddc4c41       kube-scheduler-default-k8s-diff-port-153420
	80d5d4731aabe       a1894772a478e       11 minutes ago      Exited              etcd                        0                   ec40d54622748       etcd-default-k8s-diff-port-153420
	b4369cf69b787       996be7e86d9b3       11 minutes ago      Exited              kube-controller-manager     0                   d88e303662fdf       kube-controller-manager-default-k8s-diff-port-153420
	4566249845692       d291939e99406       11 minutes ago      Exited              kube-apiserver              0                   b32e5ed6cf61f       kube-apiserver-default-k8s-diff-port-153420
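
The table shows dashboard-metrics-scraper already exited on its 6th attempt while kubernetes-dashboard never got past pulling its image. A quick way to inspect the failing container over the same runtime the test uses (containerd via crictl), assuming SSH access to the node; the container ID is the truncated one from the first column above:

    # List every attempt of the scraper container, including exited ones
    minikube -p default-k8s-diff-port-153420 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper

    # Dump the logs of the most recent attempt (ID prefix exactly as printed in the table)
    minikube -p default-k8s-diff-port-153420 ssh -- sudo crictl logs c5409ca9f6838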
	
	
	==> containerd <==
	Sep 08 13:45:06 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:45:06.280280314Z" level=info msg="StartContainer for \"9e7f749f6838a37c1db1fd3105f605c8c684372e16102d92345128c6cfd92018\" returns successfully"
	Sep 08 13:45:06 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:45:06.316717885Z" level=info msg="shim disconnected" id=9e7f749f6838a37c1db1fd3105f605c8c684372e16102d92345128c6cfd92018 namespace=k8s.io
	Sep 08 13:45:06 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:45:06.316768451Z" level=warning msg="cleaning up after shim disconnected" id=9e7f749f6838a37c1db1fd3105f605c8c684372e16102d92345128c6cfd92018 namespace=k8s.io
	Sep 08 13:45:06 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:45:06.316803749Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:45:07 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:45:07.025082882Z" level=info msg="RemoveContainer for \"585b9e6e772a47b903514da7247ce7b0996abb190542578fa26d7d92006ab904\""
	Sep 08 13:45:07 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:45:07.034258326Z" level=info msg="RemoveContainer for \"585b9e6e772a47b903514da7247ce7b0996abb190542578fa26d7d92006ab904\" returns successfully"
	Sep 08 13:47:26 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:26.198591751Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 08 13:47:26 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:26.208814890Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 08 13:47:26 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:26.211042531Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 08 13:47:26 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:26.211072741Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 08 13:47:42 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:42.198903601Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 13:47:42 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:42.201588927Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:47:42 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:42.382887622Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:47:42 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:42.683027849Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 13:47:42 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:42.683079630Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 08 13:47:52 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:52.201255072Z" level=info msg="CreateContainer within sandbox \"49598c4430788a9d119f7b7ab2c6706371a94b754f1b213f97db32619efb77bf\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
	Sep 08 13:47:52 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:52.222228738Z" level=info msg="CreateContainer within sandbox \"49598c4430788a9d119f7b7ab2c6706371a94b754f1b213f97db32619efb77bf\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b\""
	Sep 08 13:47:52 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:52.223735310Z" level=info msg="StartContainer for \"c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b\""
	Sep 08 13:47:52 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:52.282109328Z" level=info msg="StartContainer for \"c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b\" returns successfully"
	Sep 08 13:47:52 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:52.282311734Z" level=info msg="received exit event container_id:\"c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b\" id:\"c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b\" pid:2796 exit_status:255 exited_at:{seconds:1757339272 nanos:280945121}"
	Sep 08 13:47:52 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:52.307726635Z" level=info msg="shim disconnected" id=c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b namespace=k8s.io
	Sep 08 13:47:52 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:52.307916439Z" level=warning msg="cleaning up after shim disconnected" id=c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b namespace=k8s.io
	Sep 08 13:47:52 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:52.307968343Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:47:52 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:52.454042136Z" level=info msg="RemoveContainer for \"9e7f749f6838a37c1db1fd3105f605c8c684372e16102d92345128c6cfd92018\""
	Sep 08 13:47:52 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:47:52.461345144Z" level=info msg="RemoveContainer for \"9e7f749f6838a37c1db1fd3105f605c8c684372e16102d92345128c6cfd92018\" returns successfully"
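	
	The two pull failures in the containerd log above are distinct: "fake.domain/registry.k8s.io/echoserver:1.4" fails at DNS resolution ("no such host" via 192.168.85.1:53), while the dashboard image does reach registry-1.docker.io and is refused with 429 Too Many Requests (anonymous pull rate limit). As an illustrative aside (not part of the test suite), a minimal Go sketch of the resolution step that fails first, using only the standard library:
	
	// dnscheck.go: reproduce the "lookup fake.domain ... no such host" step
	// that containerd reports before giving up on the echoserver pull.
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		for _, h := range []string{"fake.domain", "registry-1.docker.io"} {
			addrs, err := net.LookupHost(h)
			if err != nil {
				// For fake.domain this returns a *net.DNSError ("no such host"),
				// matching the containerd error above.
				fmt.Printf("%s: lookup failed: %v\n", h, err)
				continue
			}
			fmt.Printf("%s: resolves to %v\n", h, addrs)
		}
	}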
	
	
	==> coredns [5932df6e2b405dddbb51b36a68d76e388c1f6515d5ffe6bceda39f8ccd97e16b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33147 - 49043 "HINFO IN 5771053844341058191.1937657061796243996. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015527834s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
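	
	The reflector errors above show coredns timing out while dialing 10.96.0.1:443, the in-cluster kubernetes Service ClusterIP, during the restart window. An illustrative probe (not from the test run) that performs a plain TCP dial to the same address, run from a pod's network namespace, reproduces that first step:
	
	// apiprobe.go: dial the kubernetes Service ClusterIP that the client-go
	// reflector was timing out against ("dial tcp 10.96.0.1:443: i/o timeout").
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		const apiserver = "10.96.0.1:443" // ClusterIP from the coredns log above
		conn, err := net.DialTimeout("tcp", apiserver, 5*time.Second)
		if err != nil {
			fmt.Printf("dial %s failed: %v\n", apiserver, err)
			return
		}
		conn.Close()
		fmt.Printf("dial %s succeeded from this network namespace\n", apiserver)
	}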
	
	
	==> coredns [c92bcab10a87ad0871a640aa80b2152b1fd54da7db385c67cbfd5b5c9f43fe8e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35341 - 23060 "HINFO IN 6672547217869353072.3588437659851635492. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039998722s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-153420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-153420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=default-k8s-diff-port-153420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_40_13_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:40:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-153420
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 13:51:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 13:48:38 +0000   Mon, 08 Sep 2025 13:40:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 13:48:38 +0000   Mon, 08 Sep 2025 13:40:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 13:48:38 +0000   Mon, 08 Sep 2025 13:40:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 13:48:38 +0000   Mon, 08 Sep 2025 13:40:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-153420
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0597b83646154e3a8b13dcf3c073e745
	  System UUID:                7ee334b3-e360-4fac-af6e-409ec1db3f12
	  Boot ID:                    9f5228b8-b58e-4b72-938a-84f5f7e9d841
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-q4jkl                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-default-k8s-diff-port-153420                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-pmj4f                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-default-k8s-diff-port-153420             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-153420    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-hqvv8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-default-k8s-diff-port-153420             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-746fcd58dc-5wgrj                         100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vqcnl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kqtp5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 9m37s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node default-k8s-diff-port-153420 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node default-k8s-diff-port-153420 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node default-k8s-diff-port-153420 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           11m                    node-controller  Node default-k8s-diff-port-153420 event: Registered Node default-k8s-diff-port-153420 in Controller
	  Normal   Starting                 9m46s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m46s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m46s (x8 over 9m46s)  kubelet          Node default-k8s-diff-port-153420 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m46s (x8 over 9m46s)  kubelet          Node default-k8s-diff-port-153420 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m46s (x7 over 9m46s)  kubelet          Node default-k8s-diff-port-153420 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m35s                  node-controller  Node default-k8s-diff-port-153420 event: Registered Node default-k8s-diff-port-153420 in Controller
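	
	The percentages under "Allocated resources" are simply the summed pod requests divided by the node's Allocatable values (2 CPU, 8022300Ki memory). A quick illustrative check, with the figures copied from the pod table above:
	
	// allocpct.go: recompute the CPU/memory request percentages shown under
	// "Allocated resources", using this node's Allocatable values.
	package main
	
	import "fmt"
	
	func main() {
		const (
			allocatableMilliCPU = 2000    // 2 CPU
			allocatableMemKi    = 8022300 // from Allocatable above
		)
		cpuRequestsMilli := 100 + 100 + 100 + 250 + 200 + 100 + 100 // coredns, etcd, kindnet, apiserver, controller-manager, scheduler, metrics-server
		memRequestsMi := 70 + 100 + 50 + 200                        // coredns, etcd, kindnet, metrics-server
	
		fmt.Printf("cpu: %dm of %dm = %d%%\n", cpuRequestsMilli, allocatableMilliCPU, cpuRequestsMilli*100/allocatableMilliCPU)   // 950m = 47%
		fmt.Printf("memory: %dMi of %dKi = %d%%\n", memRequestsMi, allocatableMemKi, memRequestsMi*1024*100/allocatableMemKi)     // 420Mi = 5%
	}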
	
	
	==> dmesg <==
	[Sep 8 12:32] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [492498c3db6a02a8e65f38b9e66bd193f09ff7919b4f8541bf2885a912547779] <==
	{"level":"warn","ts":"2025-09-08T13:41:49.528624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.559608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.576024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.595908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.619676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.645582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.663511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.677688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.719397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.741765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.758878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.782591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.813925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.836444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.849451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.871546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.896446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.918597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.940943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.962679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.990769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:50.046764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:50.066460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:50.079397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:50.150493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38550","server-name":"","error":"EOF"}
	
	
	==> etcd [80d5d4731aabeab6901bda592deb55b198b5210763d8c88d00d21c2622dc391c] <==
	{"level":"warn","ts":"2025-09-08T13:40:08.338958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.351905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.368152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.387629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.405176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.424084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.449136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.461621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.538731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.552130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.574819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.588356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.606252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.625370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.661828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.669186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.693039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.713551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.729613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.750735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.772942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.800753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.819321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.840042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.941700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46108","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:51:31 up 19:34,  0 users,  load average: 0.36, 0.54, 0.93
	Linux default-k8s-diff-port-153420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1e493f2b781b56da57b2e825d02a2a9f27a28c4bc9bd840c5b5e3a91641a57fd] <==
	I0908 13:40:18.909621       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 13:40:18.910936       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0908 13:40:18.911098       1 main.go:148] setting mtu 1500 for CNI 
	I0908 13:40:18.911116       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 13:40:18.911128       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T13:40:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0908 13:40:19.198890       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 13:40:19.199103       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 13:40:19.199203       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 13:40:19.202239       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0908 13:40:49.114492       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0908 13:40:49.199179       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0908 13:40:49.201740       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0908 13:40:49.203897       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0908 13:40:50.401201       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 13:40:50.401232       1 metrics.go:72] Registering metrics
	I0908 13:40:50.401306       1 controller.go:711] "Syncing nftables rules"
	I0908 13:40:59.118426       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:40:59.118483       1 main.go:301] handling current node
	I0908 13:41:09.121559       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:41:09.121597       1 main.go:301] handling current node
	I0908 13:41:19.114475       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:41:19.114505       1 main.go:301] handling current node
	
	
	==> kindnet [70096c3780988aad95584c16901486f94adad14bb3aa2c833fda07b0908c74a2] <==
	I0908 13:49:23.720957       1 main.go:301] handling current node
	I0908 13:49:33.718874       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:49:33.718970       1 main.go:301] handling current node
	I0908 13:49:43.713267       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:49:43.713299       1 main.go:301] handling current node
	I0908 13:49:53.713177       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:49:53.713214       1 main.go:301] handling current node
	I0908 13:50:03.717099       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:50:03.717137       1 main.go:301] handling current node
	I0908 13:50:13.713265       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:50:13.713299       1 main.go:301] handling current node
	I0908 13:50:23.718417       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:50:23.718453       1 main.go:301] handling current node
	I0908 13:50:33.718619       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:50:33.718825       1 main.go:301] handling current node
	I0908 13:50:43.714505       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:50:43.714542       1 main.go:301] handling current node
	I0908 13:50:53.720316       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:50:53.720354       1 main.go:301] handling current node
	I0908 13:51:03.719985       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:51:03.720021       1 main.go:301] handling current node
	I0908 13:51:13.712901       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:51:13.712966       1 main.go:301] handling current node
	I0908 13:51:23.721833       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:51:23.721867       1 main.go:301] handling current node
	
	
	==> kube-apiserver [45662498456929054ae3b40fdbb16f05b31d0498f1bdb935defcd60bd0a73b31] <==
	I0908 13:40:12.499860       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0908 13:40:17.030815       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 13:40:17.038530       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 13:40:17.479401       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0908 13:40:17.892873       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0908 13:41:16.871392       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 13:41:24.854106       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:57768: use of closed network connection
	I0908 13:41:25.637791       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 13:41:25.646616       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:41:25.646678       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0908 13:41:25.646729       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0908 13:41:25.799622       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.101.146.193"}
	W0908 13:41:25.831434       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:41:25.831492       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0908 13:41:25.856652       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:41:25.856704       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [c34a3fdb0a1851debb2ad4bb26c0242412b330f7610f2cf5f7f1f1d21c2d5230] <==
	I0908 13:47:14.333957       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 13:47:52.338167       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:47:52.338225       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 13:47:52.338240       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:47:52.340293       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:47:52.340371       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 13:47:52.340381       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:48:07.082391       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:48:36.899868       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:49:33.208307       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 13:49:52.338964       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:49:52.339042       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 13:49:52.339064       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:49:52.341182       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:49:52.341364       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 13:49:52.341383       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:50:00.768754       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:50:38.422125       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:51:13.788095       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [8e83b460fd12c11caabaccc49bab4239aaaf8183b1f6a397ef6d4615f6075ca8] <==
	I0908 13:45:26.896566       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:45:56.852150       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:45:56.904963       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:46:26.855960       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:46:26.912577       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:46:56.869221       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:46:56.920514       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:47:26.879378       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:47:26.927974       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:47:56.884272       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:47:56.934959       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:48:26.889047       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:48:26.942694       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:48:56.896180       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:48:56.950224       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:49:26.900958       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:49:26.957890       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:49:56.906398       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:49:56.966923       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:50:26.910798       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:50:26.973808       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:50:56.923766       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:50:56.982088       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:51:26.928700       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:51:26.992696       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [b4369cf69b787c103dc54593695e0c7b6fd035f36d82f02245556ec1854ce8d4] <==
	I0908 13:40:16.872883       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 13:40:16.873239       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 13:40:16.873629       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 13:40:16.873987       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 13:40:16.873644       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 13:40:16.875846       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 13:40:16.875921       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 13:40:16.875947       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 13:40:16.876139       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 13:40:16.877605       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 13:40:16.881083       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0908 13:40:16.881154       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0908 13:40:16.881182       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 13:40:16.881231       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0908 13:40:16.881243       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0908 13:40:16.881249       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0908 13:40:16.881815       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 13:40:16.891193       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-153420" podCIDRs=["10.244.0.0/24"]
	I0908 13:40:16.905512       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:40:16.915804       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0908 13:40:16.921674       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:40:16.921697       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 13:40:16.921704       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 13:40:16.923783       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 13:40:16.924295       1 shared_informer.go:356] "Caches are synced" controller="GC"
	
	
	==> kube-proxy [4195b54b5b0a07dcb07b8c63a6551ef910c664c9fb580ab96eea8a50122fd9d9] <==
	I0908 13:41:53.634456       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:41:53.732937       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:41:53.833753       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:41:53.833795       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0908 13:41:53.833874       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:41:53.985161       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:41:53.985214       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:41:53.995651       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:41:53.996020       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:41:53.996043       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:41:53.997223       1 config.go:200] "Starting service config controller"
	I0908 13:41:53.997238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:41:54.006839       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:41:54.006876       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:41:54.006937       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:41:54.006949       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:41:54.015533       1 config.go:309] "Starting node config controller"
	I0908 13:41:54.015563       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:41:54.015571       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:41:54.098262       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:41:54.107130       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:41:54.107173       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c83f774f7cb15ec989c3b862182b8ada28a7212590cabe73828ad3cbc652e5c9] <==
	I0908 13:40:18.904828       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:40:19.014183       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:40:19.114952       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:40:19.115032       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0908 13:40:19.115102       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:40:19.137502       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:40:19.137735       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:40:19.141741       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:40:19.143612       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:40:19.147158       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:40:19.148754       1 config.go:200] "Starting service config controller"
	I0908 13:40:19.148764       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:40:19.148777       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:40:19.148781       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:40:19.148795       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:40:19.148798       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:40:19.152366       1 config.go:309] "Starting node config controller"
	I0908 13:40:19.159742       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:40:19.159824       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:40:19.249931       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:40:19.250171       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 13:40:19.249868       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [560003c0df9ab37f0413fad8d4cfa177c37cab99ce50a679a4dfe3d45f9b9806] <==
	E0908 13:40:10.061570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 13:40:10.061968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 13:40:10.062175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 13:40:10.062463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0908 13:40:10.064515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 13:40:10.064854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 13:40:10.065067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 13:40:10.065455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 13:40:10.065655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 13:40:10.065870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 13:40:10.066117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 13:40:10.066492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 13:40:10.066688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 13:40:10.068636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 13:40:10.889343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 13:40:10.926051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 13:40:10.927938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 13:40:10.931914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 13:40:10.974020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 13:40:11.030684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 13:40:11.035457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 13:40:11.045750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 13:40:11.099010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 13:40:11.296650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0908 13:40:13.125024       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [c47ed1a29ab5d937dcdc5fb5186575edfb00bccfda6c84f0d9428103484bf684] <==
	I0908 13:41:51.134333       1 serving.go:386] Generated self-signed cert in-memory
	I0908 13:41:52.685007       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 13:41:52.685214       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:41:52.701110       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:41:52.701181       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 13:41:52.701213       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 13:41:52.701334       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:41:52.701350       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:41:52.701291       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 13:41:52.701550       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:41:52.701566       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:41:52.802897       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:41:52.802957       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 13:41:52.803049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 13:50:13 default-k8s-diff-port-153420 kubelet[662]: E0908 13:50:13.197756     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 13:50:20 default-k8s-diff-port-153420 kubelet[662]: E0908 13:50:20.198309     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-5wgrj" podUID="0d04be90-cd69-469c-a678-e605db608f3a"
	Sep 08 13:50:24 default-k8s-diff-port-153420 kubelet[662]: E0908 13:50:24.197991     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5" podUID="1d2664d7-c720-46c0-8112-72e5cac1495d"
	Sep 08 13:50:26 default-k8s-diff-port-153420 kubelet[662]: I0908 13:50:26.196997     662 scope.go:117] "RemoveContainer" containerID="c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b"
	Sep 08 13:50:26 default-k8s-diff-port-153420 kubelet[662]: E0908 13:50:26.197248     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 13:50:31 default-k8s-diff-port-153420 kubelet[662]: E0908 13:50:31.198045     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-5wgrj" podUID="0d04be90-cd69-469c-a678-e605db608f3a"
	Sep 08 13:50:37 default-k8s-diff-port-153420 kubelet[662]: I0908 13:50:37.199190     662 scope.go:117] "RemoveContainer" containerID="c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b"
	Sep 08 13:50:37 default-k8s-diff-port-153420 kubelet[662]: E0908 13:50:37.200234     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 13:50:37 default-k8s-diff-port-153420 kubelet[662]: E0908 13:50:37.200608     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5" podUID="1d2664d7-c720-46c0-8112-72e5cac1495d"
	Sep 08 13:50:43 default-k8s-diff-port-153420 kubelet[662]: E0908 13:50:43.198148     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-5wgrj" podUID="0d04be90-cd69-469c-a678-e605db608f3a"
	Sep 08 13:50:48 default-k8s-diff-port-153420 kubelet[662]: E0908 13:50:48.197740     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5" podUID="1d2664d7-c720-46c0-8112-72e5cac1495d"
	Sep 08 13:50:49 default-k8s-diff-port-153420 kubelet[662]: I0908 13:50:49.197012     662 scope.go:117] "RemoveContainer" containerID="c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b"
	Sep 08 13:50:49 default-k8s-diff-port-153420 kubelet[662]: E0908 13:50:49.197191     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 13:50:56 default-k8s-diff-port-153420 kubelet[662]: E0908 13:50:56.197923     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-5wgrj" podUID="0d04be90-cd69-469c-a678-e605db608f3a"
	Sep 08 13:51:03 default-k8s-diff-port-153420 kubelet[662]: I0908 13:51:03.197198     662 scope.go:117] "RemoveContainer" containerID="c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b"
	Sep 08 13:51:03 default-k8s-diff-port-153420 kubelet[662]: E0908 13:51:03.197809     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 13:51:03 default-k8s-diff-port-153420 kubelet[662]: E0908 13:51:03.199469     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5" podUID="1d2664d7-c720-46c0-8112-72e5cac1495d"
	Sep 08 13:51:09 default-k8s-diff-port-153420 kubelet[662]: E0908 13:51:09.199050     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-5wgrj" podUID="0d04be90-cd69-469c-a678-e605db608f3a"
	Sep 08 13:51:14 default-k8s-diff-port-153420 kubelet[662]: I0908 13:51:14.197215     662 scope.go:117] "RemoveContainer" containerID="c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b"
	Sep 08 13:51:14 default-k8s-diff-port-153420 kubelet[662]: E0908 13:51:14.198202     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5" podUID="1d2664d7-c720-46c0-8112-72e5cac1495d"
	Sep 08 13:51:14 default-k8s-diff-port-153420 kubelet[662]: E0908 13:51:14.198999     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 13:51:22 default-k8s-diff-port-153420 kubelet[662]: E0908 13:51:22.198126     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-5wgrj" podUID="0d04be90-cd69-469c-a678-e605db608f3a"
	Sep 08 13:51:25 default-k8s-diff-port-153420 kubelet[662]: I0908 13:51:25.197450     662 scope.go:117] "RemoveContainer" containerID="c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b"
	Sep 08 13:51:25 default-k8s-diff-port-153420 kubelet[662]: E0908 13:51:25.197632     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 13:51:26 default-k8s-diff-port-153420 kubelet[662]: E0908 13:51:26.197801     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5" podUID="1d2664d7-c720-46c0-8112-72e5cac1495d"
	
	
	==> storage-provisioner [27e765ee1007c29808a576a6c1ad0934eff58f69c16eae2340cc76c4d969d1b9] <==
	I0908 13:41:53.678623       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 13:42:23.680287       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [afd51157a2796b9198ad66e7adc0e148bfcc67af22e2524e29b920137c01342b] <==
	W0908 13:51:06.193523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:08.196948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:08.201946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:10.204875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:10.211655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:12.214918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:12.219552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:14.223197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:14.229933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:16.232815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:16.237213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:18.239867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:18.246703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:20.250158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:20.255017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:22.257791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:22.262577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:24.265176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:24.269905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:26.272908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:26.279826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:28.283242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:28.287829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:30.291649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:30.297601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-153420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-5wgrj kubernetes-dashboard-855c9754f9-kqtp5
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-153420 describe pod metrics-server-746fcd58dc-5wgrj kubernetes-dashboard-855c9754f9-kqtp5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-153420 describe pod metrics-server-746fcd58dc-5wgrj kubernetes-dashboard-855c9754f9-kqtp5: exit status 1 (85.480376ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-5wgrj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-kqtp5" not found

** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-153420 describe pod metrics-server-746fcd58dc-5wgrj kubernetes-dashboard-855c9754f9-kqtp5: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.50s)
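Note: the kubelet log above shows the two pull failures behind this timeout. The metrics-server failure against fake.domain appears intentional, since that addon was enabled with --registries=MetricsServer=fake.domain; the kubernetes-dashboard failure is the real blocker: registry-1.docker.io answers 429 Too Many Requests for the unauthenticated manifest request, so the pod never leaves ImagePullBackOff within the wait window. One possible mitigation, sketched below under the assumption that the CI host can authenticate to Docker Hub (both commands are standard docker/minikube CLI usage; the profile name is taken from this run):

  # authenticate so pulls count against the higher, authenticated rate limit
  $ docker login
  # pre-pull the exact dashboard digest the addon references, then side-load it into
  # the node so the kubelet does not have to contact Docker Hub during the test
  $ docker pull docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
  $ out/minikube-linux-arm64 -p default-k8s-diff-port-153420 image load docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93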

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (543.47s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pfn9p" [39d2d704-928d-4791-9c0d-9ec08f500ce1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 13:48:47.956901 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:50:13.323663 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-026461 -n embed-certs-026461
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-08 13:57:20.451331379 +0000 UTC m=+5070.786866460
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-026461 describe po kubernetes-dashboard-855c9754f9-pfn9p -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context embed-certs-026461 describe po kubernetes-dashboard-855c9754f9-pfn9p -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-pfn9p
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-026461/192.168.76.2
Start Time:       Mon, 08 Sep 2025 13:38:41 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z4gmf (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-z4gmf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p to embed-certs-026461
Normal   Pulling    15m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     15m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m31s (x64 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m31s (x64 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-026461 logs kubernetes-dashboard-855c9754f9-pfn9p -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-026461 logs kubernetes-dashboard-855c9754f9-pfn9p -n kubernetes-dashboard: exit status 1 (106.304916ms)

** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-pfn9p" is waiting to start: trying and failing to pull image

** /stderr **
start_stop_delete_test.go:285: kubectl --context embed-certs-026461 logs kubernetes-dashboard-855c9754f9-pfn9p -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-026461 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-026461
helpers_test.go:243: (dbg) docker inspect embed-certs-026461:

-- stdout --
	[
	    {
	        "Id": "4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc",
	        "Created": "2025-09-08T13:37:08.502145223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2973398,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:38:23.710884187Z",
	            "FinishedAt": "2025-09-08T13:38:22.859667311Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc/hosts",
	        "LogPath": "/var/lib/docker/containers/4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc/4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc-json.log",
	        "Name": "/embed-certs-026461",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-026461:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-026461",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4689353bfe63ed0343c7afc0ee03d6bb199ba9cd57d69f671c8f71662648b7cc",
	                "LowerDir": "/var/lib/docker/overlay2/fc702e533bca44ff9da292bc30c0d03c44c5a7f8c7dde34c7f5fe9612679d56b-init/diff:/var/lib/docker/overlay2/665ff8bb3d07b8342629df037737f3667c7c59d9d1f85930dc3dfdf138460626/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc702e533bca44ff9da292bc30c0d03c44c5a7f8c7dde34c7f5fe9612679d56b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc702e533bca44ff9da292bc30c0d03c44c5a7f8c7dde34c7f5fe9612679d56b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc702e533bca44ff9da292bc30c0d03c44c5a7f8c7dde34c7f5fe9612679d56b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-026461",
	                "Source": "/var/lib/docker/volumes/embed-certs-026461/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-026461",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-026461",
	                "name.minikube.sigs.k8s.io": "embed-certs-026461",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ddd62593d3e3d337468b52f16616d44ac2518e880ec849c39110d012099c89a7",
	            "SandboxKey": "/var/run/docker/netns/ddd62593d3e3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36743"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36744"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36747"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36745"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36746"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-026461": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:f0:05:8d:98:64",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "004e010c32c3a805df6c59891f6cd9ea228937a7febd6161ebfd8e3248f7c98d",
	                    "EndpointID": "f2c55bea3e5e3c645c141ea58adeaf79f52205363cf7ccadc9f097cd4db315d5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-026461",
	                        "4689353bfe63"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-026461 -n embed-certs-026461
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-026461 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-026461 logs -n 25: (1.855220442s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-978911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                        │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ stop    │ -p no-preload-978911 --alsologtostderr -v=3                                                                                                                                    │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ addons  │ enable dashboard -p no-preload-978911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                   │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:20 UTC │
	│ start   │ -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                  │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:20 UTC │ 08 Sep 25 13:21 UTC │
	│ image   │ old-k8s-version-552634 image list --format=json                                                                                                                                │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:36 UTC │ 08 Sep 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-552634 --alsologtostderr -v=1                                                                                                                               │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:36 UTC │ 08 Sep 25 13:36 UTC │
	│ unpause │ -p old-k8s-version-552634 --alsologtostderr -v=1                                                                                                                               │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:36 UTC │ 08 Sep 25 13:36 UTC │
	│ delete  │ -p old-k8s-version-552634                                                                                                                                                      │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:37 UTC │ 08 Sep 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-552634                                                                                                                                                      │ old-k8s-version-552634       │ jenkins │ v1.36.0 │ 08 Sep 25 13:37 UTC │ 08 Sep 25 13:37 UTC │
	│ start   │ -p embed-certs-026461 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                   │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:37 UTC │ 08 Sep 25 13:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-026461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                       │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ stop    │ -p embed-certs-026461 --alsologtostderr -v=3                                                                                                                                   │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ addons  │ enable dashboard -p embed-certs-026461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                  │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:38 UTC │
	│ start   │ -p embed-certs-026461 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                   │ embed-certs-026461           │ jenkins │ v1.36.0 │ 08 Sep 25 13:38 UTC │ 08 Sep 25 13:39 UTC │
	│ image   │ no-preload-978911 image list --format=json                                                                                                                                     │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ pause   │ -p no-preload-978911 --alsologtostderr -v=1                                                                                                                                    │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ unpause │ -p no-preload-978911 --alsologtostderr -v=1                                                                                                                                    │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ delete  │ -p no-preload-978911                                                                                                                                                           │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ delete  │ -p no-preload-978911                                                                                                                                                           │ no-preload-978911            │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ delete  │ -p disable-driver-mounts-406618                                                                                                                                                │ disable-driver-mounts-406618 │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:39 UTC │
	│ start   │ -p default-k8s-diff-port-153420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:39 UTC │ 08 Sep 25 13:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-153420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:41 UTC │ 08 Sep 25 13:41 UTC │
	│ stop    │ -p default-k8s-diff-port-153420 --alsologtostderr -v=3                                                                                                                         │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:41 UTC │ 08 Sep 25 13:41 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-153420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                        │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:41 UTC │ 08 Sep 25 13:41 UTC │
	│ start   │ -p default-k8s-diff-port-153420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ default-k8s-diff-port-153420 │ jenkins │ v1.36.0 │ 08 Sep 25 13:41 UTC │ 08 Sep 25 13:42 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:41:38
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:41:38.183493 2981832 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:41:38.183928 2981832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:41:38.183942 2981832 out.go:374] Setting ErrFile to fd 2...
	I0908 13:41:38.183947 2981832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:41:38.184718 2981832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 13:41:38.185334 2981832 out.go:368] Setting JSON to false
	I0908 13:41:38.186589 2981832 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":69850,"bootTime":1757269048,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 13:41:38.186804 2981832 start.go:140] virtualization:  
	I0908 13:41:38.189846 2981832 out.go:179] * [default-k8s-diff-port-153420] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:41:38.196363 2981832 notify.go:220] Checking for updates...
	I0908 13:41:38.197252 2981832 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:41:38.200580 2981832 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:41:38.203670 2981832 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:41:38.206693 2981832 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 13:41:38.209759 2981832 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:41:38.212733 2981832 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:41:38.216133 2981832 config.go:182] Loaded profile config "default-k8s-diff-port-153420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:41:38.216703 2981832 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:41:38.244102 2981832 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:41:38.244225 2981832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:41:38.300959 2981832 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:41:38.291668591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:41:38.301106 2981832 docker.go:318] overlay module found
	I0908 13:41:38.306196 2981832 out.go:179] * Using the docker driver based on existing profile
	I0908 13:41:38.309086 2981832 start.go:304] selected driver: docker
	I0908 13:41:38.309108 2981832 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-153420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:41:38.309238 2981832 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:41:38.309979 2981832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:41:38.365841 2981832 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:41:38.355845341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:41:38.366183 2981832 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:41:38.366209 2981832 cni.go:84] Creating CNI manager for ""
	I0908 13:41:38.366271 2981832 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:41:38.366314 2981832 start.go:348] cluster config:
	{Name:default-k8s-diff-port-153420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:41:38.371332 2981832 out.go:179] * Starting "default-k8s-diff-port-153420" primary control-plane node in "default-k8s-diff-port-153420" cluster
	I0908 13:41:38.374100 2981832 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 13:41:38.376872 2981832 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:41:38.379608 2981832 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:41:38.379687 2981832 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
	I0908 13:41:38.379700 2981832 cache.go:58] Caching tarball of preloaded images
	I0908 13:41:38.379697 2981832 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:41:38.379828 2981832 preload.go:172] Found /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0908 13:41:38.379902 2981832 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0908 13:41:38.380061 2981832 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/config.json ...
	I0908 13:41:38.401117 2981832 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 13:41:38.401143 2981832 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 13:41:38.401157 2981832 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:41:38.401186 2981832 start.go:360] acquireMachinesLock for default-k8s-diff-port-153420: {Name:mk3c73424b25a4402932f8a2bf50cf6e7d938bf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:41:38.401245 2981832 start.go:364] duration metric: took 36.085µs to acquireMachinesLock for "default-k8s-diff-port-153420"
	I0908 13:41:38.401271 2981832 start.go:96] Skipping create...Using existing machine configuration
	I0908 13:41:38.401281 2981832 fix.go:54] fixHost starting: 
	I0908 13:41:38.401625 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:38.419256 2981832 fix.go:112] recreateIfNeeded on default-k8s-diff-port-153420: state=Stopped err=<nil>
	W0908 13:41:38.419293 2981832 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 13:41:38.422593 2981832 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-153420" ...
	I0908 13:41:38.422688 2981832 cli_runner.go:164] Run: docker start default-k8s-diff-port-153420
	I0908 13:41:38.676966 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:38.702603 2981832 kic.go:430] container "default-k8s-diff-port-153420" state is running.
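Note: the cli_runner lines above and below shell out to the Docker CLI to poll the container state after "docker start". As a rough standalone sketch of that status check (not minikube's actual cli_runner implementation; the container name is taken from this run), the same call in Go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerStatus mirrors: docker container inspect <name> --format={{.State.Status}}
    func containerStatus(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", name, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	status, err := containerStatus("default-k8s-diff-port-153420")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("container state:", status) // "running" once docker start has taken effect
    }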
	I0908 13:41:38.703182 2981832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-153420
	I0908 13:41:38.726048 2981832 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/config.json ...
	I0908 13:41:38.726267 2981832 machine.go:93] provisionDockerMachine start ...
	I0908 13:41:38.726326 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:38.751024 2981832 main.go:141] libmachine: Using SSH client type: native
	I0908 13:41:38.751413 2981832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36753 <nil> <nil>}
	I0908 13:41:38.751437 2981832 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:41:38.752484 2981832 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0908 13:41:41.881945 2981832 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-153420
	
	I0908 13:41:41.881969 2981832 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-153420"
	I0908 13:41:41.882033 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:41.900806 2981832 main.go:141] libmachine: Using SSH client type: native
	I0908 13:41:41.901097 2981832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36753 <nil> <nil>}
	I0908 13:41:41.901113 2981832 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-153420 && echo "default-k8s-diff-port-153420" | sudo tee /etc/hostname
	I0908 13:41:42.043802 2981832 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-153420
	
	I0908 13:41:42.043926 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:42.063664 2981832 main.go:141] libmachine: Using SSH client type: native
	I0908 13:41:42.064010 2981832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36753 <nil> <nil>}
	I0908 13:41:42.064034 2981832 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-153420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-153420/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-153420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:41:42.208530 2981832 main.go:141] libmachine: SSH cmd err, output: <nil>: 
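Note: provisionDockerMachine drives the hostname steps above over SSH to the forwarded port 36753 on 127.0.0.1 (the first dial fails with a handshake EOF while sshd in the container is still coming up, then succeeds). A minimal sketch of that round trip, assuming golang.org/x/crypto/ssh rather than minikube's own libmachine plumbing, with the user and key path taken from the sshutil lines later in this log:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the forwarded container port and runs a single command
    // as the "docker" user, roughly what the libmachine lines above do.
    func runOverSSH(addr, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.Output(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("127.0.0.1:36753",
    		"/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa",
    		"hostname")
    	fmt.Println(out, err)
    }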
	I0908 13:41:42.208619 2981832 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-2749258/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-2749258/.minikube}
	I0908 13:41:42.208665 2981832 ubuntu.go:190] setting up certificates
	I0908 13:41:42.208700 2981832 provision.go:84] configureAuth start
	I0908 13:41:42.208801 2981832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-153420
	I0908 13:41:42.241395 2981832 provision.go:143] copyHostCerts
	I0908 13:41:42.241494 2981832 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem, removing ...
	I0908 13:41:42.241528 2981832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem
	I0908 13:41:42.241622 2981832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem (1123 bytes)
	I0908 13:41:42.241751 2981832 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem, removing ...
	I0908 13:41:42.241762 2981832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem
	I0908 13:41:42.243697 2981832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem (1679 bytes)
	I0908 13:41:42.243885 2981832 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem, removing ...
	I0908 13:41:42.243895 2981832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem
	I0908 13:41:42.243955 2981832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem (1082 bytes)
	I0908 13:41:42.244111 2981832 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-153420 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-153420 localhost minikube]
	I0908 13:41:42.945907 2981832 provision.go:177] copyRemoteCerts
	I0908 13:41:42.945982 2981832 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:41:42.946022 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:42.967469 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.059401 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 13:41:43.084522 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0908 13:41:43.110008 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 13:41:43.134954 2981832 provision.go:87] duration metric: took 926.213948ms to configureAuth
	I0908 13:41:43.134982 2981832 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:41:43.135265 2981832 config.go:182] Loaded profile config "default-k8s-diff-port-153420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:41:43.135280 2981832 machine.go:96] duration metric: took 4.409005757s to provisionDockerMachine
	I0908 13:41:43.135290 2981832 start.go:293] postStartSetup for "default-k8s-diff-port-153420" (driver="docker")
	I0908 13:41:43.135321 2981832 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:41:43.135393 2981832 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:41:43.135440 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:43.153733 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.247411 2981832 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:41:43.250658 2981832 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:41:43.250692 2981832 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:41:43.250721 2981832 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:41:43.250733 2981832 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:41:43.250744 2981832 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/addons for local assets ...
	I0908 13:41:43.250816 2981832 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/files for local assets ...
	I0908 13:41:43.250920 2981832 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem -> 27511142.pem in /etc/ssl/certs
	I0908 13:41:43.251026 2981832 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 13:41:43.259545 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:41:43.283800 2981832 start.go:296] duration metric: took 148.481655ms for postStartSetup
	I0908 13:41:43.283920 2981832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:41:43.283981 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:43.301163 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.391171 2981832 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:41:43.395681 2981832 fix.go:56] duration metric: took 4.994393846s for fixHost
	I0908 13:41:43.395708 2981832 start.go:83] releasing machines lock for "default-k8s-diff-port-153420", held for 4.99444972s
	I0908 13:41:43.395781 2981832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-153420
	I0908 13:41:43.413236 2981832 ssh_runner.go:195] Run: cat /version.json
	I0908 13:41:43.413295 2981832 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:41:43.413379 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:43.413298 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:43.435775 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.442467 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:43.526010 2981832 ssh_runner.go:195] Run: systemctl --version
	I0908 13:41:43.662562 2981832 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:41:43.666918 2981832 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 13:41:43.685126 2981832 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 13:41:43.685200 2981832 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:41:43.695329 2981832 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
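Note: the find/sed pipeline above patches the loopback CNI config so it carries a "name" field and cniVersion 1.0.0, and the following step would move any bridge/podman configs aside (none were found here). The same loopback patch expressed directly over a sample config (illustrative JSON, not the file from this host):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Sample loopback CNI config lacking a "name" field, as the sed patch assumes.
    	sample := []byte(`{"cniVersion": "0.3.1", "type": "loopback"}`)

    	var conf map[string]interface{}
    	if err := json.Unmarshal(sample, &conf); err != nil {
    		panic(err)
    	}
    	if _, ok := conf["name"]; !ok {
    		conf["name"] = "loopback" // add the missing name, as the sed insert does
    	}
    	conf["cniVersion"] = "1.0.0" // pin the CNI version, as the second sed does

    	patched, _ := json.MarshalIndent(conf, "", "  ")
    	fmt.Println(string(patched))
    }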
	I0908 13:41:43.695405 2981832 start.go:495] detecting cgroup driver to use...
	I0908 13:41:43.695446 2981832 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:41:43.695512 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 13:41:43.710570 2981832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 13:41:43.722733 2981832 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:41:43.722852 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:41:43.736500 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:41:43.748238 2981832 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:41:43.835148 2981832 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:41:43.920806 2981832 docker.go:234] disabling docker service ...
	I0908 13:41:43.920891 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:41:43.934002 2981832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:41:43.946701 2981832 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:41:44.039460 2981832 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:41:44.132792 2981832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:41:44.145057 2981832 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:41:44.162337 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 13:41:44.172997 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 13:41:44.183329 2981832 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 13:41:44.183447 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 13:41:44.195372 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:41:44.205520 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 13:41:44.215693 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:41:44.225574 2981832 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:41:44.234906 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 13:41:44.245066 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 13:41:44.259873 2981832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 13:41:44.271354 2981832 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:41:44.280087 2981832 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:41:44.289321 2981832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:41:44.379774 2981832 ssh_runner.go:195] Run: sudo systemctl restart containerd
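Note: the run of sed one-liners above rewrites /etc/containerd/config.toml in place (pause image, restrict_oom_score_adj, SystemdCgroup=false for the cgroupfs driver, runc v2 runtime, conf_dir, unprivileged ports) before containerd is restarted. As a small illustration of the SystemdCgroup edit only, done in Go over a sample snippet rather than with sed:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Same idea as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	// The sample text below is illustrative, not the config from this host.
    	sample := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`

    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Println(re.ReplaceAllString(sample, "${1}SystemdCgroup = false"))
    }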
	I0908 13:41:44.536929 2981832 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0908 13:41:44.537068 2981832 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0908 13:41:44.541466 2981832 start.go:563] Will wait 60s for crictl version
	I0908 13:41:44.541578 2981832 ssh_runner.go:195] Run: which crictl
	I0908 13:41:44.546581 2981832 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:41:44.587566 2981832 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0908 13:41:44.587635 2981832 ssh_runner.go:195] Run: containerd --version
	I0908 13:41:44.617501 2981832 ssh_runner.go:195] Run: containerd --version
	I0908 13:41:44.645009 2981832 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0908 13:41:44.647977 2981832 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-153420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:41:44.664925 2981832 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0908 13:41:44.668554 2981832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:41:44.679714 2981832 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-153420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:41:44.679841 2981832 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:41:44.679903 2981832 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:41:44.717857 2981832 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:41:44.717878 2981832 containerd.go:534] Images already preloaded, skipping extraction
	I0908 13:41:44.717938 2981832 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:41:44.752831 2981832 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:41:44.752857 2981832 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:41:44.752865 2981832 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.0 containerd true true} ...
	I0908 13:41:44.752966 2981832 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-153420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 13:41:44.753037 2981832 ssh_runner.go:195] Run: sudo crictl info
	I0908 13:41:44.791842 2981832 cni.go:84] Creating CNI manager for ""
	I0908 13:41:44.791869 2981832 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:41:44.791879 2981832 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:41:44.791900 2981832 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-153420 NodeName:default-k8s-diff-port-153420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:41:44.792040 2981832 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-153420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 13:41:44.792121 2981832 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:41:44.801483 2981832 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:41:44.801559 2981832 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:41:44.810818 2981832 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0908 13:41:44.829172 2981832 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:41:44.852071 2981832 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
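Note: the generated kubeadm config is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) copied to /var/tmp/minikube/kubeadm.yaml.new above. A small sketch that walks those documents, assuming gopkg.in/yaml.v3 (an assumption for illustration, not what minikube uses internally):

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// Path taken from the scp line above.
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Println("open:", err)
    		return
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // no more YAML documents in the stream
    			}
    			fmt.Println("decode:", err)
    			return
    		}
    		// Each document carries its own apiVersion/kind pair.
    		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
    	}
    }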
	I0908 13:41:44.870014 2981832 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:41:44.873389 2981832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:41:44.884321 2981832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:41:44.973680 2981832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:41:44.987906 2981832 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420 for IP: 192.168.85.2
	I0908 13:41:44.987944 2981832 certs.go:194] generating shared ca certs ...
	I0908 13:41:44.987960 2981832 certs.go:226] acquiring lock for ca certs: {Name:mka64c3c41f67c038c6cf0d4d20f2375b7abe78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:44.988134 2981832 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key
	I0908 13:41:44.988187 2981832 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key
	I0908 13:41:44.988199 2981832 certs.go:256] generating profile certs ...
	I0908 13:41:44.988304 2981832 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.key
	I0908 13:41:44.988383 2981832 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/apiserver.key.92270838
	I0908 13:41:44.988425 2981832 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/proxy-client.key
	I0908 13:41:44.988557 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem (1338 bytes)
	W0908 13:41:44.988595 2981832 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114_empty.pem, impossibly tiny 0 bytes
	I0908 13:41:44.988615 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 13:41:44.988650 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem (1082 bytes)
	I0908 13:41:44.988688 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:41:44.988714 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem (1679 bytes)
	I0908 13:41:44.988776 2981832 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:41:44.989412 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:41:45.025156 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 13:41:45.059536 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:41:45.096817 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 13:41:45.136081 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0908 13:41:45.171754 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 13:41:45.227729 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:41:45.277742 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 13:41:45.355221 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem --> /usr/share/ca-certificates/2751114.pem (1338 bytes)
	I0908 13:41:45.413854 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /usr/share/ca-certificates/27511142.pem (1708 bytes)
	I0908 13:41:45.446934 2981832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:41:45.474684 2981832 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:41:45.502534 2981832 ssh_runner.go:195] Run: openssl version
	I0908 13:41:45.508817 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27511142.pem && ln -fs /usr/share/ca-certificates/27511142.pem /etc/ssl/certs/27511142.pem"
	I0908 13:41:45.520778 2981832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27511142.pem
	I0908 13:41:45.525122 2981832 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 12:41 /usr/share/ca-certificates/27511142.pem
	I0908 13:41:45.525226 2981832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27511142.pem
	I0908 13:41:45.533361 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27511142.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 13:41:45.543369 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:41:45.555876 2981832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:41:45.560388 2981832 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:41:45.560477 2981832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:41:45.568121 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 13:41:45.579965 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2751114.pem && ln -fs /usr/share/ca-certificates/2751114.pem /etc/ssl/certs/2751114.pem"
	I0908 13:41:45.590008 2981832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2751114.pem
	I0908 13:41:45.593729 2981832 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 12:41 /usr/share/ca-certificates/2751114.pem
	I0908 13:41:45.593823 2981832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2751114.pem
	I0908 13:41:45.602488 2981832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2751114.pem /etc/ssl/certs/51391683.0"
	I0908 13:41:45.611965 2981832 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:41:45.616750 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 13:41:45.624097 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 13:41:45.632225 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 13:41:45.639956 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 13:41:45.647406 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 13:41:45.655098 2981832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
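Note: the "openssl x509 -noout -checkend 86400" calls above ask whether each control-plane certificate expires within the next 24 hours, which decides whether certs need regenerating. The equivalent check with Go's standard library, as a sketch (the path is one of the certs checked above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certExpiresWithin reports whether the PEM certificate at path expires within d,
    // roughly what `openssl x509 -noout -checkend 86400` answers.
    func certExpiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
    	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(expiring, err)
    }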
	I0908 13:41:45.664382 2981832 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-153420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-153420 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:41:45.664530 2981832 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0908 13:41:45.664630 2981832 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:41:45.715127 2981832 cri.go:89] found id: "c92bcab10a87ad0871a640aa80b2152b1fd54da7db385c67cbfd5b5c9f43fe8e"
	I0908 13:41:45.715202 2981832 cri.go:89] found id: "e2230598fc5039a765624af1e3f63ed99d4770d693871426274fab0e4fc0ea30"
	I0908 13:41:45.715221 2981832 cri.go:89] found id: "c83f774f7cb15ec989c3b862182b8ada28a7212590cabe73828ad3cbc652e5c9"
	I0908 13:41:45.715240 2981832 cri.go:89] found id: "1e493f2b781b56da57b2e825d02a2a9f27a28c4bc9bd840c5b5e3a91641a57fd"
	I0908 13:41:45.715266 2981832 cri.go:89] found id: "560003c0df9ab37f0413fad8d4cfa177c37cab99ce50a679a4dfe3d45f9b9806"
	I0908 13:41:45.715288 2981832 cri.go:89] found id: "80d5d4731aabeab6901bda592deb55b198b5210763d8c88d00d21c2622dc391c"
	I0908 13:41:45.715304 2981832 cri.go:89] found id: "b4369cf69b787c103dc54593695e0c7b6fd035f36d82f02245556ec1854ce8d4"
	I0908 13:41:45.715318 2981832 cri.go:89] found id: "45662498456929054ae3b40fdbb16f05b31d0498f1bdb935defcd60bd0a73b31"
	I0908 13:41:45.715334 2981832 cri.go:89] found id: ""
	I0908 13:41:45.715407 2981832 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0908 13:41:45.728111 2981832 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-08T13:41:45Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0908 13:41:45.728309 2981832 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 13:41:45.744855 2981832 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 13:41:45.744925 2981832 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 13:41:45.745008 2981832 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 13:41:45.758017 2981832 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 13:41:45.758974 2981832 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-153420" does not appear in /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:41:45.759633 2981832 kubeconfig.go:62] /home/jenkins/minikube-integration/21508-2749258/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-153420" cluster setting kubeconfig missing "default-k8s-diff-port-153420" context setting]
	I0908 13:41:45.760482 2981832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:45.762167 2981832 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 13:41:45.773698 2981832 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0908 13:41:45.773774 2981832 kubeadm.go:593] duration metric: took 28.831006ms to restartPrimaryControlPlane
	I0908 13:41:45.773801 2981832 kubeadm.go:394] duration metric: took 109.426992ms to StartCluster
	I0908 13:41:45.773842 2981832 settings.go:142] acquiring lock: {Name:mk4a46c455122873706b4d72c01ce6416a89153c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:45.773919 2981832 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:41:45.775548 2981832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:45.775848 2981832 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 13:41:45.776240 2981832 config.go:182] Loaded profile config "default-k8s-diff-port-153420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:41:45.776314 2981832 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 13:41:45.776388 2981832 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-153420"
	I0908 13:41:45.776402 2981832 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-153420"
	W0908 13:41:45.776408 2981832 addons.go:247] addon storage-provisioner should already be in state true
	I0908 13:41:45.776427 2981832 host.go:66] Checking if "default-k8s-diff-port-153420" exists ...
	I0908 13:41:45.776719 2981832 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-153420"
	I0908 13:41:45.776737 2981832 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-153420"
	I0908 13:41:45.777016 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.777428 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.777850 2981832 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-153420"
	I0908 13:41:45.777973 2981832 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-153420"
	W0908 13:41:45.778000 2981832 addons.go:247] addon dashboard should already be in state true
	I0908 13:41:45.778036 2981832 host.go:66] Checking if "default-k8s-diff-port-153420" exists ...
	I0908 13:41:45.778631 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.783534 2981832 out.go:179] * Verifying Kubernetes components...
	I0908 13:41:45.777899 2981832 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-153420"
	I0908 13:41:45.785885 2981832 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-153420"
	W0908 13:41:45.785898 2981832 addons.go:247] addon metrics-server should already be in state true
	I0908 13:41:45.785942 2981832 host.go:66] Checking if "default-k8s-diff-port-153420" exists ...
	I0908 13:41:45.788526 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.803815 2981832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:41:45.817447 2981832 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-153420"
	W0908 13:41:45.817470 2981832 addons.go:247] addon default-storageclass should already be in state true
	I0908 13:41:45.817494 2981832 host.go:66] Checking if "default-k8s-diff-port-153420" exists ...
	I0908 13:41:45.817919 2981832 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-153420 --format={{.State.Status}}
	I0908 13:41:45.882494 2981832 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 13:41:45.885825 2981832 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 13:41:45.888756 2981832 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 13:41:45.888823 2981832 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:41:45.888838 2981832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 13:41:45.888902 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:45.891659 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 13:41:45.891687 2981832 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 13:41:45.891765 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:45.897849 2981832 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 13:41:45.906583 2981832 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 13:41:45.906609 2981832 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 13:41:45.906686 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:45.912845 2981832 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 13:41:45.912867 2981832 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 13:41:45.912951 2981832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-153420
	I0908 13:41:45.944790 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:45.963396 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:45.981736 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:45.987651 2981832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36753 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/default-k8s-diff-port-153420/id_rsa Username:docker}
	I0908 13:41:46.038283 2981832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:41:46.097701 2981832 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-153420" to be "Ready" ...
	I0908 13:41:46.202930 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 13:41:46.203003 2981832 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 13:41:46.219579 2981832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:41:46.255728 2981832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:41:46.277898 2981832 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 13:41:46.277973 2981832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 13:41:46.293406 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 13:41:46.293483 2981832 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 13:41:46.444761 2981832 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 13:41:46.444839 2981832 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 13:41:46.452013 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 13:41:46.452088 2981832 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 13:41:46.723006 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 13:41:46.723031 2981832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 13:41:46.737594 2981832 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:41:46.737618 2981832 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 13:41:46.844771 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 13:41:46.844810 2981832 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 13:41:46.880658 2981832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:41:46.970510 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 13:41:46.970552 2981832 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 13:41:47.075018 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 13:41:47.075057 2981832 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 13:41:47.138830 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 13:41:47.138860 2981832 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 13:41:47.224905 2981832 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:41:47.224945 2981832 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 13:41:47.269920 2981832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 13:41:51.416689 2981832 node_ready.go:49] node "default-k8s-diff-port-153420" is "Ready"
	I0908 13:41:51.416720 2981832 node_ready.go:38] duration metric: took 5.318929962s for node "default-k8s-diff-port-153420" to be "Ready" ...
	I0908 13:41:51.416734 2981832 api_server.go:52] waiting for apiserver process to appear ...
	I0908 13:41:51.416805 2981832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:41:54.129335 2981832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.909675131s)
	I0908 13:41:54.129395 2981832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.873575587s)
	I0908 13:41:54.216295 2981832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.335593788s)
	I0908 13:41:54.216325 2981832 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-153420"
	I0908 13:41:54.216424 2981832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.946470309s)
	I0908 13:41:54.216652 2981832 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.799818943s)
	I0908 13:41:54.216671 2981832 api_server.go:72] duration metric: took 8.440772445s to wait for apiserver process to appear ...
	I0908 13:41:54.216678 2981832 api_server.go:88] waiting for apiserver healthz status ...
	I0908 13:41:54.216704 2981832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0908 13:41:54.219870 2981832 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-153420 addons enable metrics-server
	
	I0908 13:41:54.222703 2981832 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0908 13:41:54.225583 2981832 addons.go:514] duration metric: took 8.449262837s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0908 13:41:54.226611 2981832 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 13:41:54.226646 2981832 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 13:41:54.717240 2981832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0908 13:41:54.726476 2981832 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0908 13:41:54.727542 2981832 api_server.go:141] control plane version: v1.34.0
	I0908 13:41:54.727563 2981832 api_server.go:131] duration metric: took 510.878431ms to wait for apiserver health ...
	I0908 13:41:54.727573 2981832 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 13:41:54.731655 2981832 system_pods.go:59] 9 kube-system pods found
	I0908 13:41:54.731697 2981832 system_pods.go:61] "coredns-66bc5c9577-q4jkl" [9b1fa2ae-9787-4693-8166-a084bcb3270d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:54.731711 2981832 system_pods.go:61] "etcd-default-k8s-diff-port-153420" [85041d04-c452-4816-a3a3-65374cf50610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:41:54.731718 2981832 system_pods.go:61] "kindnet-pmj4f" [82339286-acd0-449d-b09e-c551b7e26217] Running
	I0908 13:41:54.731727 2981832 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-153420" [d256a46a-4fad-4315-b1aa-852ba091f72d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:41:54.731735 2981832 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-153420" [e5053ecb-6919-45b7-8476-c9a25c10ecd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:41:54.731749 2981832 system_pods.go:61] "kube-proxy-hqvv8" [a208e720-5648-4b6a-ab64-cde29e26ce5f] Running
	I0908 13:41:54.731756 2981832 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-153420" [5599901d-5667-4d46-910b-6c8b97d28411] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:41:54.731763 2981832 system_pods.go:61] "metrics-server-746fcd58dc-5wgrj" [0d04be90-cd69-469c-a678-e605db608f3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:41:54.731770 2981832 system_pods.go:61] "storage-provisioner" [bdcd8ab1-174b-4b90-b1ee-15e8c109514a] Running
	I0908 13:41:54.731776 2981832 system_pods.go:74] duration metric: took 4.197197ms to wait for pod list to return data ...
	I0908 13:41:54.731784 2981832 default_sa.go:34] waiting for default service account to be created ...
	I0908 13:41:54.734255 2981832 default_sa.go:45] found service account: "default"
	I0908 13:41:54.734279 2981832 default_sa.go:55] duration metric: took 2.485556ms for default service account to be created ...
	I0908 13:41:54.734289 2981832 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 13:41:54.737366 2981832 system_pods.go:86] 9 kube-system pods found
	I0908 13:41:54.737411 2981832 system_pods.go:89] "coredns-66bc5c9577-q4jkl" [9b1fa2ae-9787-4693-8166-a084bcb3270d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:54.737421 2981832 system_pods.go:89] "etcd-default-k8s-diff-port-153420" [85041d04-c452-4816-a3a3-65374cf50610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 13:41:54.737427 2981832 system_pods.go:89] "kindnet-pmj4f" [82339286-acd0-449d-b09e-c551b7e26217] Running
	I0908 13:41:54.737434 2981832 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-153420" [d256a46a-4fad-4315-b1aa-852ba091f72d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:41:54.737446 2981832 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-153420" [e5053ecb-6919-45b7-8476-c9a25c10ecd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 13:41:54.737453 2981832 system_pods.go:89] "kube-proxy-hqvv8" [a208e720-5648-4b6a-ab64-cde29e26ce5f] Running
	I0908 13:41:54.737460 2981832 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-153420" [5599901d-5667-4d46-910b-6c8b97d28411] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 13:41:54.737470 2981832 system_pods.go:89] "metrics-server-746fcd58dc-5wgrj" [0d04be90-cd69-469c-a678-e605db608f3a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:41:54.737483 2981832 system_pods.go:89] "storage-provisioner" [bdcd8ab1-174b-4b90-b1ee-15e8c109514a] Running
	I0908 13:41:54.737497 2981832 system_pods.go:126] duration metric: took 3.202144ms to wait for k8s-apps to be running ...
	I0908 13:41:54.737504 2981832 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 13:41:54.737565 2981832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:41:54.749866 2981832 system_svc.go:56] duration metric: took 12.349515ms WaitForService to wait for kubelet
	I0908 13:41:54.749891 2981832 kubeadm.go:578] duration metric: took 8.97399128s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:41:54.749910 2981832 node_conditions.go:102] verifying NodePressure condition ...
	I0908 13:41:54.753038 2981832 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 13:41:54.753079 2981832 node_conditions.go:123] node cpu capacity is 2
	I0908 13:41:54.753097 2981832 node_conditions.go:105] duration metric: took 3.179704ms to run NodePressure ...
	I0908 13:41:54.753110 2981832 start.go:241] waiting for startup goroutines ...
	I0908 13:41:54.753120 2981832 start.go:246] waiting for cluster config update ...
	I0908 13:41:54.753134 2981832 start.go:255] writing updated cluster config ...
	I0908 13:41:54.753431 2981832 ssh_runner.go:195] Run: rm -f paused
	I0908 13:41:54.756792 2981832 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:41:54.760701 2981832 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q4jkl" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 13:41:56.771105 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:41:59.266096 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:01.268327 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:03.765282 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:05.766762 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:08.266832 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:10.267442 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:12.765755 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:14.766606 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:17.266472 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:19.766587 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:22.265694 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:24.266392 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	W0908 13:42:26.766225 2981832 pod_ready.go:104] pod "coredns-66bc5c9577-q4jkl" is not "Ready", error: <nil>
	I0908 13:42:27.265787 2981832 pod_ready.go:94] pod "coredns-66bc5c9577-q4jkl" is "Ready"
	I0908 13:42:27.265819 2981832 pod_ready.go:86] duration metric: took 32.505083285s for pod "coredns-66bc5c9577-q4jkl" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.268482 2981832 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.272952 2981832 pod_ready.go:94] pod "etcd-default-k8s-diff-port-153420" is "Ready"
	I0908 13:42:27.272984 2981832 pod_ready.go:86] duration metric: took 4.457457ms for pod "etcd-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.275125 2981832 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.279164 2981832 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-153420" is "Ready"
	I0908 13:42:27.279191 2981832 pod_ready.go:86] duration metric: took 4.042799ms for pod "kube-apiserver-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.281290 2981832 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.464480 2981832 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-153420" is "Ready"
	I0908 13:42:27.464514 2981832 pod_ready.go:86] duration metric: took 183.197917ms for pod "kube-controller-manager-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:27.667880 2981832 pod_ready.go:83] waiting for pod "kube-proxy-hqvv8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:28.064166 2981832 pod_ready.go:94] pod "kube-proxy-hqvv8" is "Ready"
	I0908 13:42:28.064195 2981832 pod_ready.go:86] duration metric: took 396.283645ms for pod "kube-proxy-hqvv8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:28.264446 2981832 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:28.664751 2981832 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-153420" is "Ready"
	I0908 13:42:28.664776 2981832 pod_ready.go:86] duration metric: took 400.261596ms for pod "kube-scheduler-default-k8s-diff-port-153420" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:42:28.664789 2981832 pod_ready.go:40] duration metric: took 33.907967358s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:42:28.722480 2981832 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 13:42:28.725526 2981832 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-153420" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c481054b410b8       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   8                   8e8711d46b117       dashboard-metrics-scraper-6ffb444bf9-f6wkz
	65b20a4e85ccd       ba04bb24b9575       18 minutes ago      Running             storage-provisioner         3                   cc5e4403350ad       storage-provisioner
	3fc465279954a       138784d87c9c5       18 minutes ago      Running             coredns                     1                   45a38ca6b2cae       coredns-66bc5c9577-4wnmn
	b21089fdf0605       1611cd07b61d5       18 minutes ago      Running             busybox                     1                   5dd5f0d9d4d75       busybox
	dac710a912bb4       ba04bb24b9575       18 minutes ago      Exited              storage-provisioner         2                   cc5e4403350ad       storage-provisioner
	6e197c1878f39       6fc32d66c1411       18 minutes ago      Running             kube-proxy                  1                   1389278804160       kube-proxy-xsnrj
	0864e28d4ed8f       b1a8c6f707935       18 minutes ago      Running             kindnet-cni                 1                   fac4e59fd6907       kindnet-9flm7
	4ff6224ab16c1       996be7e86d9b3       18 minutes ago      Running             kube-controller-manager     1                   d2225cd1e59f7       kube-controller-manager-embed-certs-026461
	20084fdf88867       a1894772a478e       18 minutes ago      Running             etcd                        1                   6ef792a3df840       etcd-embed-certs-026461
	d10972d8b632a       a25f5ef9c34c3       18 minutes ago      Running             kube-scheduler              1                   bdf56271b7b31       kube-scheduler-embed-certs-026461
	fe40db18432cc       d291939e99406       18 minutes ago      Running             kube-apiserver              1                   b5a0fc931bd66       kube-apiserver-embed-certs-026461
	7496a20db308c       1611cd07b61d5       19 minutes ago      Exited              busybox                     0                   6c7ba793ea9e7       busybox
	fe3cede4fe480       138784d87c9c5       19 minutes ago      Exited              coredns                     0                   ac7bf281c1496       coredns-66bc5c9577-4wnmn
	e213037c3d557       b1a8c6f707935       19 minutes ago      Exited              kindnet-cni                 0                   35ef3580c85e1       kindnet-9flm7
	c5c57e9e7465f       6fc32d66c1411       19 minutes ago      Exited              kube-proxy                  0                   ea590236c7dff       kube-proxy-xsnrj
	b33eda9a841a9       a1894772a478e       19 minutes ago      Exited              etcd                        0                   3f00bfcd04b87       etcd-embed-certs-026461
	6330ead2ac8c8       996be7e86d9b3       19 minutes ago      Exited              kube-controller-manager     0                   06edb7887c5b7       kube-controller-manager-embed-certs-026461
	d28195f477cc9       a25f5ef9c34c3       19 minutes ago      Exited              kube-scheduler              0                   a9240ad5e2b11       kube-scheduler-embed-certs-026461
	d2738e9189b00       d291939e99406       19 minutes ago      Exited              kube-apiserver              0                   f094fea068fd3       kube-apiserver-embed-certs-026461
	
	
	==> containerd <==
	Sep 08 13:49:40 embed-certs-026461 containerd[576]: time="2025-09-08T13:49:40.286972383Z" level=info msg="received exit event container_id:\"ee109acde2c6e3d241a9c0fa0204bd61e8f658d5b941ff24f18997e916dec5d2\" id:\"ee109acde2c6e3d241a9c0fa0204bd61e8f658d5b941ff24f18997e916dec5d2\" pid:3191 exit_status:255 exited_at:{seconds:1757339380 nanos:284528395}"
	Sep 08 13:49:40 embed-certs-026461 containerd[576]: time="2025-09-08T13:49:40.311954176Z" level=info msg="shim disconnected" id=ee109acde2c6e3d241a9c0fa0204bd61e8f658d5b941ff24f18997e916dec5d2 namespace=k8s.io
	Sep 08 13:49:40 embed-certs-026461 containerd[576]: time="2025-09-08T13:49:40.312163031Z" level=warning msg="cleaning up after shim disconnected" id=ee109acde2c6e3d241a9c0fa0204bd61e8f658d5b941ff24f18997e916dec5d2 namespace=k8s.io
	Sep 08 13:49:40 embed-certs-026461 containerd[576]: time="2025-09-08T13:49:40.312258257Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:49:41 embed-certs-026461 containerd[576]: time="2025-09-08T13:49:41.317709225Z" level=info msg="RemoveContainer for \"19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed\""
	Sep 08 13:49:41 embed-certs-026461 containerd[576]: time="2025-09-08T13:49:41.327127569Z" level=info msg="RemoveContainer for \"19a327be064f320215e334ac68e39aca5b3aaf4f85db2ec05dd48e0b94c091ed\" returns successfully"
	Sep 08 13:54:28 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:28.195011956Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 13:54:28 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:28.197856049Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:54:28 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:28.343018887Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:54:28 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:28.644986954Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 13:54:28 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:28.645034723Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 08 13:54:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:38.192956351Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 08 13:54:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:38.203906743Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 08 13:54:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:38.205985624Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 08 13:54:38 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:38.205991212Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 08 13:54:49 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:49.194803967Z" level=info msg="CreateContainer within sandbox \"8e8711d46b1172f486ede8dd4c92ae528802192a13c63be3f98ddcd257833589\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Sep 08 13:54:49 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:49.217117959Z" level=info msg="CreateContainer within sandbox \"8e8711d46b1172f486ede8dd4c92ae528802192a13c63be3f98ddcd257833589\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5\""
	Sep 08 13:54:49 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:49.218079219Z" level=info msg="StartContainer for \"c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5\""
	Sep 08 13:54:49 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:49.270949227Z" level=info msg="StartContainer for \"c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5\" returns successfully"
	Sep 08 13:54:49 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:49.276855779Z" level=info msg="received exit event container_id:\"c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5\" id:\"c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5\" pid:3306 exit_status:255 exited_at:{seconds:1757339689 nanos:276451327}"
	Sep 08 13:54:49 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:49.300424323Z" level=info msg="shim disconnected" id=c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5 namespace=k8s.io
	Sep 08 13:54:49 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:49.300606406Z" level=warning msg="cleaning up after shim disconnected" id=c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5 namespace=k8s.io
	Sep 08 13:54:49 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:49.300659204Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:54:50 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:50.115315434Z" level=info msg="RemoveContainer for \"ee109acde2c6e3d241a9c0fa0204bd61e8f658d5b941ff24f18997e916dec5d2\""
	Sep 08 13:54:50 embed-certs-026461 containerd[576]: time="2025-09-08T13:54:50.124866975Z" level=info msg="RemoveContainer for \"ee109acde2c6e3d241a9c0fa0204bd61e8f658d5b941ff24f18997e916dec5d2\" returns successfully"
	
	
	==> coredns [3fc465279954ae02420ccafa28848517fd2c806685cb7505ca34449590f063f3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34998 - 36333 "HINFO IN 1728789598036754601.3188074970806449334. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042540563s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [fe3cede4fe48003419ad4d922a72cd175b10badc10a1731c781055596c825baa] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59382 - 7217 "HINFO IN 323145707175562631.8575395926584091426. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.029866751s
	
	
	==> describe nodes <==
	Name:               embed-certs-026461
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-026461
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=embed-certs-026461
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_37_36_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:37:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-026461
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 13:57:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 13:56:26 +0000   Mon, 08 Sep 2025 13:37:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 13:56:26 +0000   Mon, 08 Sep 2025 13:37:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 13:56:26 +0000   Mon, 08 Sep 2025 13:37:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 13:56:26 +0000   Mon, 08 Sep 2025 13:37:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-026461
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 68838359c58240dbbcb101d7b33babff
	  System UUID:                273578e8-238b-44d1-a9ea-2502d8b75f4a
	  Boot ID:                    9f5228b8-b58e-4b72-938a-84f5f7e9d841
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-4wnmn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 etcd-embed-certs-026461                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-9flm7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-embed-certs-026461             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-embed-certs-026461    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-xsnrj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-embed-certs-026461             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-wxqrm               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-f6wkz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pfn9p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  19m                kubelet          Node embed-certs-026461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                kubelet          Node embed-certs-026461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m                kubelet          Node embed-certs-026461 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19m                node-controller  Node embed-certs-026461 event: Registered Node embed-certs-026461 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-026461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-026461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-026461 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                node-controller  Node embed-certs-026461 event: Registered Node embed-certs-026461 in Controller
	
	
	==> dmesg <==
	[Sep 8 12:32] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [20084fdf888674171cdf5ae036c2c159a28fc20c487ff0e04de3384b619ab59c] <==
	{"level":"warn","ts":"2025-09-08T13:38:34.169442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.189264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.225719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.268134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.308217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.342150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.366867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.382087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.398654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.417080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.441427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.457038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.475978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.499696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.534871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.549800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.566995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:38:34.676488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49684","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T13:39:47.316066Z","caller":"traceutil/trace.go:172","msg":"trace[2043197692] transaction","detail":"{read_only:false; response_revision:745; number_of_response:1; }","duration":"107.396113ms","start":"2025-09-08T13:39:47.208656Z","end":"2025-09-08T13:39:47.316052Z","steps":["trace[2043197692] 'process raft request'  (duration: 107.319365ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:48:33.227992Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1029}
	{"level":"info","ts":"2025-09-08T13:48:33.313500Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1029,"took":"85.147636ms","hash":4263095473,"current-db-size-bytes":2805760,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1003520,"current-db-size-in-use":"1.0 MB"}
	{"level":"info","ts":"2025-09-08T13:48:33.313560Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4263095473,"revision":1029,"compact-revision":-1}
	{"level":"info","ts":"2025-09-08T13:53:33.235278Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1285}
	{"level":"info","ts":"2025-09-08T13:53:33.238604Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1285,"took":"3.052899ms","hash":1227059336,"current-db-size-bytes":2805760,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1376256,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-08T13:53:33.238652Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1227059336,"revision":1285,"compact-revision":1029}
	
	
	==> etcd [b33eda9a841a9cd6dabfae54d7049877b124dd1b907cbc696c92b7f9454c0782] <==
	{"level":"warn","ts":"2025-09-08T13:37:29.659069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.688013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.738662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.783090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.814931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.878319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.900402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.944301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:29.979479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.020676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.051872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.084295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.110110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.202804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.208039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.232826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.283239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.329656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.372413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.414731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.474301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.500183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.559769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.581556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:37:30.699316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49152","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:57:22 up 19:39,  0 users,  load average: 1.08, 0.80, 0.91
	Linux embed-certs-026461 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [0864e28d4ed8f1c2dfb4f237a05798334cdf825daeaacc02eceb59ce2e52c913] <==
	I0908 13:55:17.807745       1 main.go:301] handling current node
	I0908 13:55:27.808340       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:55:27.808434       1 main.go:301] handling current node
	I0908 13:55:37.811671       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:55:37.811933       1 main.go:301] handling current node
	I0908 13:55:47.807557       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:55:47.807696       1 main.go:301] handling current node
	I0908 13:55:57.814424       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:55:57.814458       1 main.go:301] handling current node
	I0908 13:56:07.815160       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:56:07.815497       1 main.go:301] handling current node
	I0908 13:56:17.808227       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:56:17.808489       1 main.go:301] handling current node
	I0908 13:56:27.810159       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:56:27.810195       1 main.go:301] handling current node
	I0908 13:56:37.810680       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:56:37.811069       1 main.go:301] handling current node
	I0908 13:56:47.808153       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:56:47.808381       1 main.go:301] handling current node
	I0908 13:56:57.814482       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:56:57.814518       1 main.go:301] handling current node
	I0908 13:57:07.816493       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:57:07.816533       1 main.go:301] handling current node
	I0908 13:57:17.818411       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:57:17.818447       1 main.go:301] handling current node
	
	
	==> kindnet [e213037c3d5577450863d222bd09abd0b099c0a413275757e8478e475168e6f3] <==
	I0908 13:37:40.705962       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 13:37:40.706192       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0908 13:37:40.706313       1 main.go:148] setting mtu 1500 for CNI 
	I0908 13:37:40.706330       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 13:37:40.706396       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T13:37:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0908 13:37:40.906214       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 13:37:40.906326       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 13:37:40.909421       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 13:37:40.909731       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0908 13:37:41.198498       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 13:37:41.198530       1 metrics.go:72] Registering metrics
	I0908 13:37:41.198590       1 controller.go:711] "Syncing nftables rules"
	I0908 13:37:50.907463       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:37:50.907519       1 main.go:301] handling current node
	I0908 13:38:00.906332       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:38:00.906604       1 main.go:301] handling current node
	I0908 13:38:10.906278       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0908 13:38:10.906409       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2738e9189b007c276ce183af874bcd7acf236cb12e77b9760e42726ff6eefbb] <==
	I0908 13:37:35.113624       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0908 13:37:35.124096       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0908 13:37:38.706087       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0908 13:37:39.043440       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0908 13:37:39.595638       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 13:37:39.607157       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0908 13:38:09.999097       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:56954: use of closed network connection
	I0908 13:38:10.833100       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 13:38:10.841890       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:38:10.841940       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0908 13:38:10.842000       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0908 13:38:10.989569       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.102.92.2"}
	W0908 13:38:11.001469       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:38:11.001519       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0908 13:38:11.029942       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:38:11.029997       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [fe40db18432cc24d4aad79981dcd3eed10152204cfaa86fab6a46703c8226021] <==
	I0908 13:53:38.541308       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:54:28.893377       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 13:54:36.918113       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:54:36.918209       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 13:54:36.918239       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:54:36.920111       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:54:36.920149       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 13:54:36.920160       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:54:45.915798       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:55:39.465106       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:55:58.822761       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 13:56:36.918874       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:56:36.918965       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 13:56:36.918976       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:56:36.921041       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:56:36.921083       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 13:56:36.921096       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:57:00.680832       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:57:22.426239       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [4ff6224ab16c1cc607d9f8e1c58ff5fdaff97f854f6a7985708c3d78905fcd65] <==
	I0908 13:51:11.469564       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:51:41.362033       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:51:41.476619       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:52:11.367922       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:52:11.483796       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:52:41.372931       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:52:41.491086       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:53:11.377632       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:53:11.498401       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:53:41.382189       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:53:41.506252       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:54:11.386117       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:54:11.513766       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:54:41.391489       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:54:41.521535       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:55:11.396107       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:55:11.529326       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:55:41.401291       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:55:41.537117       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:56:11.405538       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:56:11.544373       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:56:41.411883       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:56:41.552409       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:57:11.417109       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:57:11.560644       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [6330ead2ac8c8438ea6f4bb5f9acb7aa56cc82f9b368662b5bb91e80534650db] <==
	I0908 13:37:38.588282       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0908 13:37:38.588299       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 13:37:38.588585       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-026461"
	I0908 13:37:38.596701       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 13:37:38.588650       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 13:37:38.588638       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 13:37:38.588876       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0908 13:37:38.596994       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 13:37:38.597261       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:37:38.592017       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 13:37:38.602796       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0908 13:37:38.604526       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:37:38.604709       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 13:37:38.604792       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 13:37:38.614571       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:37:38.629685       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:37:38.636412       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0908 13:37:38.637647       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 13:37:38.637654       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 13:37:38.638763       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 13:37:38.638831       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 13:37:38.638888       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 13:37:38.639077       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 13:37:38.641090       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 13:37:38.653387       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [6e197c1878f391fe7b5956a5e92981886600f9b4482ba9a410de004af54407aa] <==
	I0908 13:38:37.603386       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:38:37.692838       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:38:37.793471       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:38:37.793513       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0908 13:38:37.793596       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:38:37.836160       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:38:37.836237       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:38:37.849032       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:38:37.849475       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:38:37.849500       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:38:37.856047       1 config.go:200] "Starting service config controller"
	I0908 13:38:37.856067       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:38:37.856125       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:38:37.856132       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:38:37.856146       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:38:37.856150       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:38:37.857285       1 config.go:309] "Starting node config controller"
	I0908 13:38:37.857295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:38:37.857313       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:38:37.956452       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:38:37.956495       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:38:37.956539       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c5c57e9e7465f3b6dcc007639128213d7d29165743b8342185639d44196c0e60] <==
	I0908 13:37:40.457152       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:37:40.576184       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:37:40.676331       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:37:40.676371       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0908 13:37:40.676440       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:37:40.696809       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:37:40.696859       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:37:40.704148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:37:40.704433       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:37:40.704456       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:37:40.707863       1 config.go:200] "Starting service config controller"
	I0908 13:37:40.707885       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:37:40.707920       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:37:40.707929       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:37:40.707950       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:37:40.707957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:37:40.708874       1 config.go:309] "Starting node config controller"
	I0908 13:37:40.708907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:37:40.708914       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:37:40.808836       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:37:40.808872       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:37:40.809036       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d10972d8b632ac1be953d78807b13300d9a41a4eeb13698e89a3e43b29d1dd02] <==
	I0908 13:38:33.152768       1 serving.go:386] Generated self-signed cert in-memory
	I0908 13:38:36.365982       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 13:38:36.366012       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:38:36.383212       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 13:38:36.383258       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 13:38:36.383298       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:38:36.383305       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:38:36.383318       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:38:36.383324       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:38:36.386416       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:38:36.386461       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 13:38:36.484228       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:38:36.484291       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:38:36.484235       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kube-scheduler [d28195f477cc9f19233c21d3f187a2ddca9c454ef053874df5af77d0ffc92680] <==
	I0908 13:37:29.633996       1 serving.go:386] Generated self-signed cert in-memory
	I0908 13:37:34.271090       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 13:37:34.271313       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:37:34.276774       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:37:34.276953       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 13:37:34.277011       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 13:37:34.277083       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 13:37:34.286502       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:37:34.286687       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:37:34.286774       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:37:34.286813       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:37:34.377594       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 13:37:34.387473       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:37:34.387680       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 13:56:02 embed-certs-026461 kubelet[676]: E0908 13:56:02.192818     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	Sep 08 13:56:10 embed-certs-026461 kubelet[676]: E0908 13:56:10.194148     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	Sep 08 13:56:13 embed-certs-026461 kubelet[676]: I0908 13:56:13.191685     676 scope.go:117] "RemoveContainer" containerID="c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5"
	Sep 08 13:56:13 embed-certs-026461 kubelet[676]: E0908 13:56:13.191872     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f6wkz_kubernetes-dashboard(1199b5db-3a0d-4e19-97c2-d475a0eb4f26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f6wkz" podUID="1199b5db-3a0d-4e19-97c2-d475a0eb4f26"
	Sep 08 13:56:14 embed-certs-026461 kubelet[676]: E0908 13:56:14.192974     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	Sep 08 13:56:22 embed-certs-026461 kubelet[676]: E0908 13:56:22.193425     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	Sep 08 13:56:24 embed-certs-026461 kubelet[676]: I0908 13:56:24.192596     676 scope.go:117] "RemoveContainer" containerID="c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5"
	Sep 08 13:56:24 embed-certs-026461 kubelet[676]: E0908 13:56:24.193403     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f6wkz_kubernetes-dashboard(1199b5db-3a0d-4e19-97c2-d475a0eb4f26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f6wkz" podUID="1199b5db-3a0d-4e19-97c2-d475a0eb4f26"
	Sep 08 13:56:25 embed-certs-026461 kubelet[676]: E0908 13:56:25.193433     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	Sep 08 13:56:34 embed-certs-026461 kubelet[676]: E0908 13:56:34.193103     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	Sep 08 13:56:36 embed-certs-026461 kubelet[676]: I0908 13:56:36.192564     676 scope.go:117] "RemoveContainer" containerID="c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5"
	Sep 08 13:56:36 embed-certs-026461 kubelet[676]: E0908 13:56:36.193221     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f6wkz_kubernetes-dashboard(1199b5db-3a0d-4e19-97c2-d475a0eb4f26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f6wkz" podUID="1199b5db-3a0d-4e19-97c2-d475a0eb4f26"
	Sep 08 13:56:39 embed-certs-026461 kubelet[676]: E0908 13:56:39.192640     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	Sep 08 13:56:48 embed-certs-026461 kubelet[676]: E0908 13:56:48.193105     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	Sep 08 13:56:48 embed-certs-026461 kubelet[676]: I0908 13:56:48.193672     676 scope.go:117] "RemoveContainer" containerID="c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5"
	Sep 08 13:56:48 embed-certs-026461 kubelet[676]: E0908 13:56:48.193819     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f6wkz_kubernetes-dashboard(1199b5db-3a0d-4e19-97c2-d475a0eb4f26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f6wkz" podUID="1199b5db-3a0d-4e19-97c2-d475a0eb4f26"
	Sep 08 13:56:54 embed-certs-026461 kubelet[676]: E0908 13:56:54.193983     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	Sep 08 13:57:00 embed-certs-026461 kubelet[676]: I0908 13:57:00.213135     676 scope.go:117] "RemoveContainer" containerID="c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5"
	Sep 08 13:57:00 embed-certs-026461 kubelet[676]: E0908 13:57:00.214026     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f6wkz_kubernetes-dashboard(1199b5db-3a0d-4e19-97c2-d475a0eb4f26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f6wkz" podUID="1199b5db-3a0d-4e19-97c2-d475a0eb4f26"
	Sep 08 13:57:00 embed-certs-026461 kubelet[676]: E0908 13:57:00.217387     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	Sep 08 13:57:05 embed-certs-026461 kubelet[676]: E0908 13:57:05.192734     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	Sep 08 13:57:11 embed-certs-026461 kubelet[676]: I0908 13:57:11.192832     676 scope.go:117] "RemoveContainer" containerID="c481054b410b8e85d48442d65621e6da1ce3e6838a1e504deb034b7b18e5a4d5"
	Sep 08 13:57:11 embed-certs-026461 kubelet[676]: E0908 13:57:11.192984     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f6wkz_kubernetes-dashboard(1199b5db-3a0d-4e19-97c2-d475a0eb4f26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f6wkz" podUID="1199b5db-3a0d-4e19-97c2-d475a0eb4f26"
	Sep 08 13:57:11 embed-certs-026461 kubelet[676]: E0908 13:57:11.193739     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-wxqrm" podUID="abebbfc6-c3ac-4cc3-b788-e4dce09f134f"
	Sep 08 13:57:20 embed-certs-026461 kubelet[676]: E0908 13:57:20.195668     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfn9p" podUID="39d2d704-928d-4791-9c0d-9ec08f500ce1"
	
	
	==> storage-provisioner [65b20a4e85ccd3a3ac2b3e9c988a6052142c1cd0cbbc417291d379ca6140316e] <==
	W0908 13:56:57.695263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:59.697833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:59.702301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:01.705209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:01.709679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:03.713451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:03.720149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:05.723782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:05.728161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:07.731594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:07.735885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:09.738627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:09.743500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:11.747628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:11.754792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:13.758143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:13.762393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:15.765458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:15.769907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:17.773444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:17.778105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:19.781088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:19.785396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:21.789441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:57:21.796947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dac710a912bb41ce6bc10c6eaf19a9182dad19be39b8c19bc10afb37fb140110] <==
	I0908 13:38:37.751605       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 13:39:07.764007       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-026461 -n embed-certs-026461
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-026461 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-wxqrm kubernetes-dashboard-855c9754f9-pfn9p
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-026461 describe pod metrics-server-746fcd58dc-wxqrm kubernetes-dashboard-855c9754f9-pfn9p
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-026461 describe pod metrics-server-746fcd58dc-wxqrm kubernetes-dashboard-855c9754f9-pfn9p: exit status 1 (79.931417ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-wxqrm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-pfn9p" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-026461 describe pod metrics-server-746fcd58dc-wxqrm kubernetes-dashboard-855c9754f9-pfn9p: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (543.47s)
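The FAIL above comes down to the 429 Too Many Requests responses in the kubelet log: anonymous pulls of docker.io/kubernetesui/dashboard are being rejected by Docker Hub's unauthenticated pull rate limit, so the dashboard pod never leaves ImagePullBackOff. A minimal Go sketch for checking the remaining anonymous pull quota from the affected runner, assuming Docker Hub's documented rate-limit check flow (an anonymous token from auth.docker.io, then a HEAD of the ratelimitpreview/test manifest, which per Docker's docs does not consume a pull), looks like this:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Fetch an anonymous pull token for Docker's dedicated rate-limit test repository.
	tokenURL := "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull"
	resp, err := http.Get(tokenURL)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// HEAD the test manifest; the registry reports the current quota in response headers.
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()

	fmt.Println("status:                 ", res.Status)
	fmt.Println("ratelimit-limit:        ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:    ", res.Header.Get("ratelimit-remaining"))
	fmt.Println("docker-ratelimit-source:", res.Header.Get("docker-ratelimit-source"))
}

If ratelimit-remaining is 0, or the HEAD itself returns 429, the ImagePullBackOff events above are expected until the rate-limit window resets or the pulls are authenticated or served from a mirror.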

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (544.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kqtp5" [1d2664d7-c720-46c0-8112-72e5cac1495d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 13:51:34.195377 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:52:37.127953 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:53:47.956177 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:00.193404 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:55:11.026868 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:55:13.323388 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:56:34.195340 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:56:36.385331 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-08 14:00:33.0860045 +0000 UTC m=+5263.421539589
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-153420 describe po kubernetes-dashboard-855c9754f9-kqtp5 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-153420 describe po kubernetes-dashboard-855c9754f9-kqtp5 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-kqtp5
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-153420/192.168.85.2
Start Time:       Mon, 08 Sep 2025 13:41:56 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ts5jn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-ts5jn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason       Age                   From               Message
----     ------       ----                  ----               -------
Normal   Scheduled    18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5 to default-k8s-diff-port-153420
Warning  FailedMount  18m                   kubelet            MountVolume.SetUp failed for volume "kube-api-access-ts5jn" : configmap "kube-root-ca.crt" not found
Normal   Pulling      15m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed       15m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed       15m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff      3m30s (x65 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed       3m30s (x65 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-153420 logs kubernetes-dashboard-855c9754f9-kqtp5 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-153420 logs kubernetes-dashboard-855c9754f9-kqtp5 -n kubernetes-dashboard: exit status 1 (144.665413ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-kqtp5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-153420 logs kubernetes-dashboard-855c9754f9-kqtp5 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-153420 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-153420
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-153420:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502",
	        "Created": "2025-09-08T13:39:48.48691874Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2981961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:41:38.454695777Z",
	            "FinishedAt": "2025-09-08T13:41:37.638855437Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502/hostname",
	        "HostsPath": "/var/lib/docker/containers/c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502/hosts",
	        "LogPath": "/var/lib/docker/containers/c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502/c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502-json.log",
	        "Name": "/default-k8s-diff-port-153420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-153420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-153420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c6555f7901410e5555189fd688ba984a93cd49bbd9150f847663a1057d92b502",
	                "LowerDir": "/var/lib/docker/overlay2/e0ab1c43c73c9799fb57efc367fac133b1a1946f0102d5b1b10ed818739a534a-init/diff:/var/lib/docker/overlay2/665ff8bb3d07b8342629df037737f3667c7c59d9d1f85930dc3dfdf138460626/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e0ab1c43c73c9799fb57efc367fac133b1a1946f0102d5b1b10ed818739a534a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e0ab1c43c73c9799fb57efc367fac133b1a1946f0102d5b1b10ed818739a534a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e0ab1c43c73c9799fb57efc367fac133b1a1946f0102d5b1b10ed818739a534a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-153420",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-153420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-153420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-153420",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-153420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bad7a24edabb398db9feb6196f95c5e2623c88b41a055d98160695e34b0a693a",
	            "SandboxKey": "/var/run/docker/netns/bad7a24edabb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36753"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36754"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36757"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36755"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36756"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-153420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:2b:b2:47:fd:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "61a8f9ac4016b0639d2c8f94aad29d0c91889227ea926c5b1872fd9bf7d32ad5",
	                    "EndpointID": "f59a51c2a799e7170ae960c5c77223af143d9e8634299ffe7196a4e0629d32ea",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-153420",
	                        "c6555f790141"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-153420 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-153420 logs -n 25: (2.596967523s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                              │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p newest-cni-194228                                                                                                          │ newest-cni-194228 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ start   │ -p auto-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 pgrep -a kubelet                                                                                               │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo cat /etc/nsswitch.conf                                                                                    │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo cat /etc/hosts                                                                                            │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo cat /etc/resolv.conf                                                                                      │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo crictl pods                                                                                               │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo crictl ps --all                                                                                           │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                    │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo ip a s                                                                                                    │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo ip r s                                                                                                    │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo iptables-save                                                                                             │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo iptables -t nat -L -n -v                                                                                  │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo systemctl status kubelet --all --full --no-pager                                                          │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo systemctl cat kubelet --no-pager                                                                          │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo journalctl -xeu kubelet --all --full --no-pager                                                           │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo cat /etc/kubernetes/kubelet.conf                                                                          │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo cat /var/lib/kubelet/config.yaml                                                                          │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo systemctl status docker --all --full --no-pager                                                           │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │                     │
	│ ssh     │ -p auto-207352 sudo systemctl cat docker --no-pager                                                                           │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo cat /etc/docker/daemon.json                                                                               │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │                     │
	│ ssh     │ -p auto-207352 sudo docker system info                                                                                        │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │                     │
	│ ssh     │ -p auto-207352 sudo systemctl status cri-docker --all --full --no-pager                                                       │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │                     │
	│ ssh     │ -p auto-207352 sudo systemctl cat cri-docker --no-pager                                                                       │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ ssh     │ -p auto-207352 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                  │ auto-207352       │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:58:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:58:29.645864 2994706 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:58:29.646032 2994706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:58:29.646063 2994706 out.go:374] Setting ErrFile to fd 2...
	I0908 13:58:29.646081 2994706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:58:29.646396 2994706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 13:58:29.646846 2994706 out.go:368] Setting JSON to false
	I0908 13:58:29.647883 2994706 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70862,"bootTime":1757269048,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 13:58:29.647982 2994706 start.go:140] virtualization:  
	I0908 13:58:29.651812 2994706 out.go:179] * [auto-207352] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:58:29.655920 2994706 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:58:29.656101 2994706 notify.go:220] Checking for updates...
	I0908 13:58:29.662504 2994706 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:58:29.665630 2994706 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:58:29.668710 2994706 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 13:58:29.671718 2994706 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:58:29.674701 2994706 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:58:29.678165 2994706 config.go:182] Loaded profile config "default-k8s-diff-port-153420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:58:29.678267 2994706 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:58:29.726049 2994706 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:58:29.726176 2994706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:58:29.782586 2994706 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:58:29.772649785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:58:29.782692 2994706 docker.go:318] overlay module found
	I0908 13:58:29.785813 2994706 out.go:179] * Using the docker driver based on user configuration
	I0908 13:58:29.788698 2994706 start.go:304] selected driver: docker
	I0908 13:58:29.788719 2994706 start.go:918] validating driver "docker" against <nil>
	I0908 13:58:29.788734 2994706 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:58:29.789486 2994706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:58:29.845714 2994706 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:58:29.8369449 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:58:29.845869 2994706 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:58:29.846130 2994706 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:58:29.849041 2994706 out.go:179] * Using Docker driver with root privileges
	I0908 13:58:29.851874 2994706 cni.go:84] Creating CNI manager for ""
	I0908 13:58:29.851945 2994706 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:58:29.851958 2994706 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 13:58:29.852042 2994706 start.go:348] cluster config:
	{Name:auto-207352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:auto-207352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:58:29.855195 2994706 out.go:179] * Starting "auto-207352" primary control-plane node in "auto-207352" cluster
	I0908 13:58:29.858009 2994706 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 13:58:29.860957 2994706 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:58:29.863821 2994706 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:58:29.863871 2994706 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
	I0908 13:58:29.863885 2994706 cache.go:58] Caching tarball of preloaded images
	I0908 13:58:29.863910 2994706 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:58:29.864042 2994706 preload.go:172] Found /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0908 13:58:29.864054 2994706 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0908 13:58:29.864208 2994706 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/config.json ...
	I0908 13:58:29.864239 2994706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/config.json: {Name:mked66c49fa50696148f1c401e3efd98d4a75d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:58:29.884152 2994706 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 13:58:29.884177 2994706 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 13:58:29.884202 2994706 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:58:29.884226 2994706 start.go:360] acquireMachinesLock for auto-207352: {Name:mk88394dc0ebb1ce9f3bd596c2a0fe69cff7bbc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:58:29.884378 2994706 start.go:364] duration metric: took 132.517µs to acquireMachinesLock for "auto-207352"
	I0908 13:58:29.884408 2994706 start.go:93] Provisioning new machine with config: &{Name:auto-207352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:auto-207352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 13:58:29.884475 2994706 start.go:125] createHost starting for "" (driver="docker")
	I0908 13:58:29.889710 2994706 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0908 13:58:29.889937 2994706 start.go:159] libmachine.API.Create for "auto-207352" (driver="docker")
	I0908 13:58:29.889975 2994706 client.go:168] LocalClient.Create starting
	I0908 13:58:29.890049 2994706 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem
	I0908 13:58:29.890085 2994706 main.go:141] libmachine: Decoding PEM data...
	I0908 13:58:29.890101 2994706 main.go:141] libmachine: Parsing certificate...
	I0908 13:58:29.890166 2994706 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem
	I0908 13:58:29.890188 2994706 main.go:141] libmachine: Decoding PEM data...
	I0908 13:58:29.890201 2994706 main.go:141] libmachine: Parsing certificate...
	I0908 13:58:29.890619 2994706 cli_runner.go:164] Run: docker network inspect auto-207352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 13:58:29.906506 2994706 cli_runner.go:211] docker network inspect auto-207352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 13:58:29.906594 2994706 network_create.go:284] running [docker network inspect auto-207352] to gather additional debugging logs...
	I0908 13:58:29.906614 2994706 cli_runner.go:164] Run: docker network inspect auto-207352
	W0908 13:58:29.921762 2994706 cli_runner.go:211] docker network inspect auto-207352 returned with exit code 1
	I0908 13:58:29.921794 2994706 network_create.go:287] error running [docker network inspect auto-207352]: docker network inspect auto-207352: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-207352 not found
	I0908 13:58:29.921812 2994706 network_create.go:289] output of [docker network inspect auto-207352]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-207352 not found
	
	** /stderr **
	I0908 13:58:29.921905 2994706 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:58:29.938039 2994706 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e8e4d680522a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e6:fc:21:01:0d:7c} reservation:<nil>}
	I0908 13:58:29.938329 2994706 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f4041ba326ec IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:98:55:eb:1e:7f} reservation:<nil>}
	I0908 13:58:29.938738 2994706 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-24188c4f7034 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:e3:11:f1:6e:19} reservation:<nil>}
	I0908 13:58:29.939244 2994706 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a27090}
	I0908 13:58:29.939269 2994706 network_create.go:124] attempt to create docker network auto-207352 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0908 13:58:29.939324 2994706 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-207352 auto-207352
	I0908 13:58:30.007655 2994706 network_create.go:108] docker network auto-207352 192.168.76.0/24 created
	I0908 13:58:30.007709 2994706 kic.go:121] calculated static IP "192.168.76.2" for the "auto-207352" container
	I0908 13:58:30.007799 2994706 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 13:58:30.039333 2994706 cli_runner.go:164] Run: docker volume create auto-207352 --label name.minikube.sigs.k8s.io=auto-207352 --label created_by.minikube.sigs.k8s.io=true
	I0908 13:58:30.060415 2994706 oci.go:103] Successfully created a docker volume auto-207352
	I0908 13:58:30.060502 2994706 cli_runner.go:164] Run: docker run --rm --name auto-207352-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-207352 --entrypoint /usr/bin/test -v auto-207352:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 13:58:30.577815 2994706 oci.go:107] Successfully prepared a docker volume auto-207352
	I0908 13:58:30.577867 2994706 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:58:30.577887 2994706 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 13:58:30.577959 2994706 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-207352:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 13:58:34.766588 2994706 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-207352:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.188588272s)
	I0908 13:58:34.766620 2994706 kic.go:203] duration metric: took 4.188729815s to extract preloaded images to volume ...
	W0908 13:58:34.766750 2994706 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 13:58:34.766860 2994706 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 13:58:34.823489 2994706 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-207352 --name auto-207352 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-207352 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-207352 --network auto-207352 --ip 192.168.76.2 --volume auto-207352:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 13:58:35.129108 2994706 cli_runner.go:164] Run: docker container inspect auto-207352 --format={{.State.Running}}
	I0908 13:58:35.150204 2994706 cli_runner.go:164] Run: docker container inspect auto-207352 --format={{.State.Status}}
	I0908 13:58:35.177069 2994706 cli_runner.go:164] Run: docker exec auto-207352 stat /var/lib/dpkg/alternatives/iptables
	I0908 13:58:35.226623 2994706 oci.go:144] the created container "auto-207352" has a running status.
	I0908 13:58:35.226666 2994706 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/auto-207352/id_rsa...
	I0908 13:58:35.567014 2994706 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/auto-207352/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 13:58:35.600827 2994706 cli_runner.go:164] Run: docker container inspect auto-207352 --format={{.State.Status}}
	I0908 13:58:35.633380 2994706 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 13:58:35.633411 2994706 kic_runner.go:114] Args: [docker exec --privileged auto-207352 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 13:58:35.704692 2994706 cli_runner.go:164] Run: docker container inspect auto-207352 --format={{.State.Status}}
	I0908 13:58:35.736460 2994706 machine.go:93] provisionDockerMachine start ...
	I0908 13:58:35.736567 2994706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-207352
	I0908 13:58:35.772787 2994706 main.go:141] libmachine: Using SSH client type: native
	I0908 13:58:35.773172 2994706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36768 <nil> <nil>}
	I0908 13:58:35.773210 2994706 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:58:35.774122 2994706 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39350->127.0.0.1:36768: read: connection reset by peer
	I0908 13:58:38.902177 2994706 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-207352
	
	I0908 13:58:38.902199 2994706 ubuntu.go:182] provisioning hostname "auto-207352"
	I0908 13:58:38.902268 2994706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-207352
	I0908 13:58:38.920911 2994706 main.go:141] libmachine: Using SSH client type: native
	I0908 13:58:38.921249 2994706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36768 <nil> <nil>}
	I0908 13:58:38.921265 2994706 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-207352 && echo "auto-207352" | sudo tee /etc/hostname
	I0908 13:58:39.068047 2994706 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-207352
	
	I0908 13:58:39.068122 2994706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-207352
	I0908 13:58:39.086836 2994706 main.go:141] libmachine: Using SSH client type: native
	I0908 13:58:39.087191 2994706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36768 <nil> <nil>}
	I0908 13:58:39.087212 2994706 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-207352' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-207352/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-207352' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:58:39.210275 2994706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 13:58:39.210304 2994706 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-2749258/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-2749258/.minikube}
	I0908 13:58:39.210321 2994706 ubuntu.go:190] setting up certificates
	I0908 13:58:39.210374 2994706 provision.go:84] configureAuth start
	I0908 13:58:39.210441 2994706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-207352
	I0908 13:58:39.228005 2994706 provision.go:143] copyHostCerts
	I0908 13:58:39.228076 2994706 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem, removing ...
	I0908 13:58:39.228090 2994706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem
	I0908 13:58:39.228168 2994706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem (1679 bytes)
	I0908 13:58:39.228270 2994706 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem, removing ...
	I0908 13:58:39.228280 2994706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem
	I0908 13:58:39.228308 2994706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem (1082 bytes)
	I0908 13:58:39.228380 2994706 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem, removing ...
	I0908 13:58:39.228390 2994706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem
	I0908 13:58:39.228416 2994706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem (1123 bytes)
	I0908 13:58:39.228502 2994706 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem org=jenkins.auto-207352 san=[127.0.0.1 192.168.76.2 auto-207352 localhost minikube]
	I0908 13:58:39.875053 2994706 provision.go:177] copyRemoteCerts
	I0908 13:58:39.875122 2994706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:58:39.875163 2994706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-207352
	I0908 13:58:39.899431 2994706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36768 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/auto-207352/id_rsa Username:docker}
	I0908 13:58:40.000274 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 13:58:40.072819 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 13:58:40.100205 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0908 13:58:40.128707 2994706 provision.go:87] duration metric: took 918.296142ms to configureAuth
	I0908 13:58:40.128739 2994706 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:58:40.128954 2994706 config.go:182] Loaded profile config "auto-207352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:58:40.128969 2994706 machine.go:96] duration metric: took 4.392485095s to provisionDockerMachine
	I0908 13:58:40.128977 2994706 client.go:171] duration metric: took 10.238991056s to LocalClient.Create
	I0908 13:58:40.129003 2994706 start.go:167] duration metric: took 10.239067846s to libmachine.API.Create "auto-207352"
	I0908 13:58:40.129012 2994706 start.go:293] postStartSetup for "auto-207352" (driver="docker")
	I0908 13:58:40.129027 2994706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:58:40.129085 2994706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:58:40.129131 2994706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-207352
	I0908 13:58:40.149479 2994706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36768 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/auto-207352/id_rsa Username:docker}
	I0908 13:58:40.243872 2994706 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:58:40.247288 2994706 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:58:40.247335 2994706 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:58:40.247363 2994706 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:58:40.247378 2994706 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:58:40.247404 2994706 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/addons for local assets ...
	I0908 13:58:40.247474 2994706 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/files for local assets ...
	I0908 13:58:40.247558 2994706 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem -> 27511142.pem in /etc/ssl/certs
	I0908 13:58:40.247673 2994706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 13:58:40.256484 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:58:40.281263 2994706 start.go:296] duration metric: took 152.231759ms for postStartSetup
	I0908 13:58:40.281684 2994706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-207352
	I0908 13:58:40.298764 2994706 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/config.json ...
	I0908 13:58:40.299054 2994706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:58:40.299105 2994706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-207352
	I0908 13:58:40.315387 2994706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36768 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/auto-207352/id_rsa Username:docker}
	I0908 13:58:40.407213 2994706 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:58:40.412010 2994706 start.go:128] duration metric: took 10.527518179s to createHost
	I0908 13:58:40.412042 2994706 start.go:83] releasing machines lock for "auto-207352", held for 10.52765137s
	I0908 13:58:40.412117 2994706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-207352
	I0908 13:58:40.428820 2994706 ssh_runner.go:195] Run: cat /version.json
	I0908 13:58:40.428881 2994706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-207352
	I0908 13:58:40.429132 2994706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:58:40.429205 2994706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-207352
	I0908 13:58:40.450467 2994706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36768 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/auto-207352/id_rsa Username:docker}
	I0908 13:58:40.458534 2994706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36768 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/auto-207352/id_rsa Username:docker}
	I0908 13:58:40.537909 2994706 ssh_runner.go:195] Run: systemctl --version
	I0908 13:58:40.691675 2994706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:58:40.695917 2994706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 13:58:40.723854 2994706 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 13:58:40.723995 2994706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:58:40.755809 2994706 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0908 13:58:40.755888 2994706 start.go:495] detecting cgroup driver to use...
	I0908 13:58:40.755933 2994706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:58:40.756011 2994706 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 13:58:40.768811 2994706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 13:58:40.780659 2994706 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:58:40.780744 2994706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:58:40.795087 2994706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:58:40.810198 2994706 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:58:40.900849 2994706 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:58:41.005773 2994706 docker.go:234] disabling docker service ...
	I0908 13:58:41.005911 2994706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:58:41.027823 2994706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:58:41.040136 2994706 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:58:41.133378 2994706 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:58:41.230374 2994706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:58:41.242747 2994706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:58:41.259222 2994706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 13:58:41.269504 2994706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 13:58:41.280080 2994706 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 13:58:41.280183 2994706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 13:58:41.290249 2994706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:58:41.299943 2994706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 13:58:41.310008 2994706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 13:58:41.319903 2994706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:58:41.328950 2994706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 13:58:41.339838 2994706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 13:58:41.349868 2994706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
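Note: the tee/sed commands above set the CRI client endpoint and rewrite /etc/containerd/config.toml before containerd is restarted. The effective values can be spot-checked from inside the node with a sketch like the following (grep patterns only; exact section paths in config.toml depend on the base image):

    # CRI endpoint written to /etc/crictl.yaml
    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///run/containerd/containerd.sock

    # settings applied to containerd's config by the sed edits above
    grep -E 'restrict_oom_score_adj|sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    #   restrict_oom_score_adj = false
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = false            <- "cgroupfs" cgroup driver
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true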
	I0908 13:58:41.360120 2994706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:58:41.368688 2994706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:58:41.377504 2994706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:58:41.461437 2994706 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 13:58:41.595215 2994706 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0908 13:58:41.595339 2994706 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0908 13:58:41.598924 2994706 start.go:563] Will wait 60s for crictl version
	I0908 13:58:41.599034 2994706 ssh_runner.go:195] Run: which crictl
	I0908 13:58:41.602429 2994706 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:58:41.643733 2994706 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0908 13:58:41.643854 2994706 ssh_runner.go:195] Run: containerd --version
	I0908 13:58:41.667125 2994706 ssh_runner.go:195] Run: containerd --version
	I0908 13:58:41.694593 2994706 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0908 13:58:41.697493 2994706 cli_runner.go:164] Run: docker network inspect auto-207352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:58:41.712880 2994706 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0908 13:58:41.716961 2994706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:58:41.728955 2994706 kubeadm.go:875] updating cluster {Name:auto-207352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:auto-207352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:58:41.729097 2994706 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 13:58:41.729190 2994706 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:58:41.773065 2994706 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:58:41.773090 2994706 containerd.go:534] Images already preloaded, skipping extraction
	I0908 13:58:41.773161 2994706 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:58:41.809046 2994706 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 13:58:41.809112 2994706 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:58:41.809135 2994706 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 containerd true true} ...
	I0908 13:58:41.809238 2994706 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-207352 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:auto-207352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 13:58:41.809312 2994706 ssh_runner.go:195] Run: sudo crictl info
	I0908 13:58:41.846678 2994706 cni.go:84] Creating CNI manager for ""
	I0908 13:58:41.846704 2994706 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:58:41.846713 2994706 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:58:41.846735 2994706 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-207352 NodeName:auto-207352 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:58:41.846867 2994706 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-207352"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 13:58:41.846947 2994706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:58:41.855997 2994706 binaries.go:44] Found k8s binaries, skipping transfer
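Note: the kubeadm configuration printed above is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below), copied to kubeadm.yaml, and then fed to kubeadm init at 13:58:45. A config like this can be exercised without creating a cluster by using kubeadm's dry-run mode (illustrative sketch, not part of the test run):

    # validate the generated config and print the manifests kubeadm would write
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run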
	I0908 13:58:41.856065 2994706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:58:41.864987 2994706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0908 13:58:41.883095 2994706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:58:41.901242 2994706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
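Note: the two systemd files written above carry the kubelet unit and the drop-in with the ExecStart flags shown earlier (--bootstrap-kubeconfig, --config, --hostname-override, --node-ip). After the daemon-reload below, the merged unit can be inspected from inside the node (sketch):

    # show kubelet.service plus the 10-kubeadm.conf drop-in written above
    systemctl cat kubelet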
	I0908 13:58:41.919467 2994706 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:58:41.923011 2994706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
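Note: the bash snippets at 13:58:41.716961 and 13:58:41.923011 rewrite /etc/hosts idempotently (strip any stale entry, append the new one). Afterwards the node's hosts file should contain both minikube-internal names (sketch):

    # entries appended by the hosts-file edits above
    grep minikube.internal /etc/hosts
    # 192.168.76.1   host.minikube.internal
    # 192.168.76.2   control-plane.minikube.internal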
	I0908 13:58:41.934123 2994706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:58:42.033388 2994706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:58:42.050304 2994706 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352 for IP: 192.168.76.2
	I0908 13:58:42.050329 2994706 certs.go:194] generating shared ca certs ...
	I0908 13:58:42.050401 2994706 certs.go:226] acquiring lock for ca certs: {Name:mka64c3c41f67c038c6cf0d4d20f2375b7abe78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:58:42.050545 2994706 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key
	I0908 13:58:42.050596 2994706 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key
	I0908 13:58:42.050609 2994706 certs.go:256] generating profile certs ...
	I0908 13:58:42.050666 2994706 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.key
	I0908 13:58:42.050722 2994706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt with IP's: []
	I0908 13:58:42.564463 2994706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt ...
	I0908 13:58:42.564497 2994706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: {Name:mk32a6555643ad2e3862f6db5e94d2b246a2d70c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:58:42.564697 2994706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.key ...
	I0908 13:58:42.564710 2994706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.key: {Name:mk3ab928e5404fdc060b50b69a659f6aacddeecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:58:42.564831 2994706 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/apiserver.key.c496038f
	I0908 13:58:42.564850 2994706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/apiserver.crt.c496038f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0908 13:58:43.401704 2994706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/apiserver.crt.c496038f ...
	I0908 13:58:43.401734 2994706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/apiserver.crt.c496038f: {Name:mkdd2a25bf62582e21f01a9b8c6976ccdc116ce8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:58:43.401916 2994706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/apiserver.key.c496038f ...
	I0908 13:58:43.401929 2994706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/apiserver.key.c496038f: {Name:mk9f5859fac24fb0c6a3c1f414c8783b3de445ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:58:43.402012 2994706 certs.go:381] copying /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/apiserver.crt.c496038f -> /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/apiserver.crt
	I0908 13:58:43.402100 2994706 certs.go:385] copying /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/apiserver.key.c496038f -> /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/apiserver.key
	I0908 13:58:43.402168 2994706 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/proxy-client.key
	I0908 13:58:43.402183 2994706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/proxy-client.crt with IP's: []
	I0908 13:58:44.553370 2994706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/proxy-client.crt ...
	I0908 13:58:44.553403 2994706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/proxy-client.crt: {Name:mk166e96141b00dcac50679055cb43e5ff80b41c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:58:44.553591 2994706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/proxy-client.key ...
	I0908 13:58:44.553604 2994706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/proxy-client.key: {Name:mk4834a033cadb5234cccf3ebaa11f375d7e4021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:58:44.553794 2994706 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem (1338 bytes)
	W0908 13:58:44.553847 2994706 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114_empty.pem, impossibly tiny 0 bytes
	I0908 13:58:44.553861 2994706 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 13:58:44.553887 2994706 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem (1082 bytes)
	I0908 13:58:44.553915 2994706 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:58:44.553941 2994706 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem (1679 bytes)
	I0908 13:58:44.553986 2994706 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 13:58:44.554592 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:58:44.580934 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 13:58:44.607295 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:58:44.634898 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 13:58:44.659967 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0908 13:58:44.684727 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 13:58:44.709372 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:58:44.734452 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 13:58:44.763843 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem --> /usr/share/ca-certificates/2751114.pem (1338 bytes)
	I0908 13:58:44.789329 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /usr/share/ca-certificates/27511142.pem (1708 bytes)
	I0908 13:58:44.816444 2994706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:58:44.841451 2994706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:58:44.859952 2994706 ssh_runner.go:195] Run: openssl version
	I0908 13:58:44.865502 2994706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2751114.pem && ln -fs /usr/share/ca-certificates/2751114.pem /etc/ssl/certs/2751114.pem"
	I0908 13:58:44.875339 2994706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2751114.pem
	I0908 13:58:44.879140 2994706 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 12:41 /usr/share/ca-certificates/2751114.pem
	I0908 13:58:44.879217 2994706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2751114.pem
	I0908 13:58:44.886326 2994706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2751114.pem /etc/ssl/certs/51391683.0"
	I0908 13:58:44.895992 2994706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27511142.pem && ln -fs /usr/share/ca-certificates/27511142.pem /etc/ssl/certs/27511142.pem"
	I0908 13:58:44.905806 2994706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27511142.pem
	I0908 13:58:44.909216 2994706 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 12:41 /usr/share/ca-certificates/27511142.pem
	I0908 13:58:44.909290 2994706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27511142.pem
	I0908 13:58:44.916780 2994706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27511142.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 13:58:44.926540 2994706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:58:44.936001 2994706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:58:44.939601 2994706 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:58:44.939661 2994706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:58:44.946785 2994706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
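Note: the symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash values, which is how the /etc/ssl/certs lookup directory is organised. The hash for any of the PEM files can be reproduced with (sketch):

    # print the hash OpenSSL expects for the /etc/ssl/certs/<hash>.0 symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   (matches the symlink created above)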
	I0908 13:58:44.957332 2994706 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:58:44.961432 2994706 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 13:58:44.961487 2994706 kubeadm.go:392] StartCluster: {Name:auto-207352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:auto-207352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:58:44.961560 2994706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0908 13:58:44.961637 2994706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:58:45.017601 2994706 cri.go:89] found id: ""
	I0908 13:58:45.017713 2994706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 13:58:45.049170 2994706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 13:58:45.085166 2994706 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 13:58:45.085274 2994706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 13:58:45.133376 2994706 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 13:58:45.133398 2994706 kubeadm.go:157] found existing configuration files:
	
	I0908 13:58:45.133494 2994706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 13:58:45.166470 2994706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 13:58:45.166550 2994706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 13:58:45.205045 2994706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 13:58:45.227824 2994706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 13:58:45.227923 2994706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 13:58:45.240673 2994706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 13:58:45.252764 2994706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 13:58:45.252838 2994706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 13:58:45.265821 2994706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 13:58:45.279524 2994706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 13:58:45.279602 2994706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 13:58:45.295236 2994706 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 13:58:45.378727 2994706 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 13:58:45.380000 2994706 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 13:58:45.414750 2994706 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 13:58:45.414987 2994706 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0908 13:58:45.415094 2994706 kubeadm.go:310] OS: Linux
	I0908 13:58:45.415208 2994706 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 13:58:45.415274 2994706 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 13:58:45.415398 2994706 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 13:58:45.415507 2994706 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 13:58:45.415636 2994706 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 13:58:45.415685 2994706 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 13:58:45.415741 2994706 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 13:58:45.415804 2994706 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 13:58:45.415858 2994706 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 13:58:45.487880 2994706 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 13:58:45.488044 2994706 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 13:58:45.488167 2994706 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 13:58:45.497088 2994706 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 13:58:45.502908 2994706 out.go:252]   - Generating certificates and keys ...
	I0908 13:58:45.503101 2994706 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 13:58:45.503207 2994706 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 13:58:45.745444 2994706 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 13:58:46.469521 2994706 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 13:58:46.667383 2994706 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 13:58:46.862730 2994706 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 13:58:47.408493 2994706 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 13:58:47.408803 2994706 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-207352 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0908 13:58:48.266562 2994706 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 13:58:48.266949 2994706 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-207352 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0908 13:58:48.424224 2994706 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 13:58:49.141603 2994706 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 13:58:49.238054 2994706 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 13:58:49.238393 2994706 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 13:58:49.853181 2994706 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 13:58:49.957131 2994706 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 13:58:51.311082 2994706 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 13:58:51.464120 2994706 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 13:58:52.869181 2994706 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 13:58:52.869833 2994706 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 13:58:52.872438 2994706 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 13:58:52.875742 2994706 out.go:252]   - Booting up control plane ...
	I0908 13:58:52.875909 2994706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 13:58:52.876510 2994706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 13:58:52.876586 2994706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 13:58:52.889471 2994706 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 13:58:52.889594 2994706 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 13:58:52.897083 2994706 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 13:58:52.897185 2994706 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 13:58:52.897226 2994706 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 13:58:52.993938 2994706 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 13:58:52.994070 2994706 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 13:58:53.994689 2994706 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000684726s
	I0908 13:58:53.998097 2994706 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 13:58:53.998198 2994706 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0908 13:58:53.998299 2994706 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 13:58:53.998398 2994706 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 13:58:58.598760 2994706 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.599915729s
	I0908 13:59:00.773360 2994706 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 6.77520014s
	I0908 13:59:01.501714 2994706 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 7.503369164s
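Note: the three [control-plane-check] probes above poll plain HTTPS health endpoints; they can be re-run by hand from inside the node (e.g. minikube ssh -p auto-207352) when debugging a slow start (sketch; -k skips verification of the cluster-issued serving certs):

    curl -sk https://192.168.76.2:8443/livez      # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez        # kube-scheduler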
	I0908 13:59:01.524673 2994706 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 13:59:01.539773 2994706 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 13:59:01.569164 2994706 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 13:59:01.569380 2994706 kubeadm.go:310] [mark-control-plane] Marking the node auto-207352 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 13:59:01.591012 2994706 kubeadm.go:310] [bootstrap-token] Using token: q9s82g.olqqjiy0f87fjs3r
	I0908 13:59:01.594153 2994706 out.go:252]   - Configuring RBAC rules ...
	I0908 13:59:01.594323 2994706 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 13:59:01.604830 2994706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 13:59:01.616813 2994706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 13:59:01.622434 2994706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 13:59:01.631134 2994706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 13:59:01.637044 2994706 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 13:59:01.909659 2994706 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 13:59:02.338903 2994706 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 13:59:02.908941 2994706 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 13:59:02.910191 2994706 kubeadm.go:310] 
	I0908 13:59:02.910270 2994706 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 13:59:02.910282 2994706 kubeadm.go:310] 
	I0908 13:59:02.910416 2994706 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 13:59:02.910431 2994706 kubeadm.go:310] 
	I0908 13:59:02.910469 2994706 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 13:59:02.910531 2994706 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 13:59:02.910582 2994706 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 13:59:02.910586 2994706 kubeadm.go:310] 
	I0908 13:59:02.910650 2994706 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 13:59:02.910656 2994706 kubeadm.go:310] 
	I0908 13:59:02.910704 2994706 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 13:59:02.910709 2994706 kubeadm.go:310] 
	I0908 13:59:02.910762 2994706 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 13:59:02.910838 2994706 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 13:59:02.910907 2994706 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 13:59:02.910912 2994706 kubeadm.go:310] 
	I0908 13:59:02.910997 2994706 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 13:59:02.911075 2994706 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 13:59:02.911080 2994706 kubeadm.go:310] 
	I0908 13:59:02.911165 2994706 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q9s82g.olqqjiy0f87fjs3r \
	I0908 13:59:02.911270 2994706 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea53cc69e0a3bf1a14d944ab7c19a16da3435f4f2a06e216682a58ad77f6bb3 \
	I0908 13:59:02.911291 2994706 kubeadm.go:310] 	--control-plane 
	I0908 13:59:02.911296 2994706 kubeadm.go:310] 
	I0908 13:59:02.911381 2994706 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 13:59:02.911386 2994706 kubeadm.go:310] 
	I0908 13:59:02.911470 2994706 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q9s82g.olqqjiy0f87fjs3r \
	I0908 13:59:02.911602 2994706 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea53cc69e0a3bf1a14d944ab7c19a16da3435f4f2a06e216682a58ad77f6bb3 
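Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. Assuming the default RSA CA that minikube generates, it can be recomputed from the CA certificate the test copied to /var/lib/minikube/certs/ca.crt (standard kubeadm recipe, shown for illustration):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # bea53cc69e0a3bf1a14d944ab7c19a16da3435f4f2a06e216682a58ad77f6bb3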
	I0908 13:59:02.915047 2994706 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 13:59:02.915287 2994706 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0908 13:59:02.915416 2994706 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 13:59:02.915447 2994706 cni.go:84] Creating CNI manager for ""
	I0908 13:59:02.915455 2994706 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:59:02.918543 2994706 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 13:59:02.921564 2994706 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 13:59:02.925950 2994706 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 13:59:02.925973 2994706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 13:59:02.951756 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 13:59:03.245681 2994706 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 13:59:03.245822 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:59:03.245912 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-207352 minikube.k8s.io/updated_at=2025_09_08T13_59_03_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba minikube.k8s.io/name=auto-207352 minikube.k8s.io/primary=true
	I0908 13:59:03.527180 2994706 ops.go:34] apiserver oom_adj: -16
	I0908 13:59:03.539134 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:59:04.039425 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:59:04.539851 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:59:05.040064 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:59:05.539933 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:59:06.041344 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:59:06.539255 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:59:07.039250 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:59:07.539458 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:59:08.039576 2994706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:59:08.180855 2994706 kubeadm.go:1105] duration metric: took 4.935077315s to wait for elevateKubeSystemPrivileges
	I0908 13:59:08.180884 2994706 kubeadm.go:394] duration metric: took 23.219400431s to StartCluster
	I0908 13:59:08.180902 2994706 settings.go:142] acquiring lock: {Name:mk4a46c455122873706b4d72c01ce6416a89153c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:59:08.180962 2994706 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:59:08.182012 2994706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:59:08.182222 2994706 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 13:59:08.182315 2994706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 13:59:08.182606 2994706 config.go:182] Loaded profile config "auto-207352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:59:08.182648 2994706 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 13:59:08.182705 2994706 addons.go:69] Setting storage-provisioner=true in profile "auto-207352"
	I0908 13:59:08.182720 2994706 addons.go:238] Setting addon storage-provisioner=true in "auto-207352"
	I0908 13:59:08.182741 2994706 host.go:66] Checking if "auto-207352" exists ...
	I0908 13:59:08.183027 2994706 addons.go:69] Setting default-storageclass=true in profile "auto-207352"
	I0908 13:59:08.183041 2994706 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-207352"
	I0908 13:59:08.183298 2994706 cli_runner.go:164] Run: docker container inspect auto-207352 --format={{.State.Status}}
	I0908 13:59:08.183675 2994706 cli_runner.go:164] Run: docker container inspect auto-207352 --format={{.State.Status}}
	I0908 13:59:08.186997 2994706 out.go:179] * Verifying Kubernetes components...
	I0908 13:59:08.197679 2994706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:59:08.213349 2994706 addons.go:238] Setting addon default-storageclass=true in "auto-207352"
	I0908 13:59:08.213388 2994706 host.go:66] Checking if "auto-207352" exists ...
	I0908 13:59:08.213820 2994706 cli_runner.go:164] Run: docker container inspect auto-207352 --format={{.State.Status}}
	I0908 13:59:08.235225 2994706 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 13:59:08.241553 2994706 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:59:08.241578 2994706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 13:59:08.241649 2994706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-207352
	I0908 13:59:08.253095 2994706 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 13:59:08.253114 2994706 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 13:59:08.253174 2994706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-207352
	I0908 13:59:08.280615 2994706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36768 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/auto-207352/id_rsa Username:docker}
	I0908 13:59:08.291198 2994706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36768 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/auto-207352/id_rsa Username:docker}
	I0908 13:59:08.501740 2994706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:59:08.510021 2994706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
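Note: the sed pipeline above edits the coredns ConfigMap in place, inserting a static hosts block ahead of the forward plugin so that host.minikube.internal resolves from inside pods (the "host record injected" message below confirms it). The resulting Corefile fragment can be checked with (sketch; uses the profile's kubeconfig context):

    kubectl --context auto-207352 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
    #        hosts {
    #           192.168.76.1 host.minikube.internal
    #           fallthrough
    #        }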
	I0908 13:59:08.510181 2994706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:59:08.533940 2994706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:59:09.173256 2994706 node_ready.go:35] waiting up to 15m0s for node "auto-207352" to be "Ready" ...
	I0908 13:59:09.173487 2994706 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0908 13:59:09.195012 2994706 node_ready.go:49] node "auto-207352" is "Ready"
	I0908 13:59:09.195043 2994706 node_ready.go:38] duration metric: took 21.760867ms for node "auto-207352" to be "Ready" ...
	I0908 13:59:09.195057 2994706 api_server.go:52] waiting for apiserver process to appear ...
	I0908 13:59:09.195155 2994706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:59:09.523803 2994706 api_server.go:72] duration metric: took 1.341541808s to wait for apiserver process to appear ...
	I0908 13:59:09.523866 2994706 api_server.go:88] waiting for apiserver healthz status ...
	I0908 13:59:09.523896 2994706 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 13:59:09.526963 2994706 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0908 13:59:09.529958 2994706 addons.go:514] duration metric: took 1.347285479s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0908 13:59:09.537464 2994706 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0908 13:59:09.538802 2994706 api_server.go:141] control plane version: v1.34.0
	I0908 13:59:09.538867 2994706 api_server.go:131] duration metric: took 14.981682ms to wait for apiserver health ...
	I0908 13:59:09.538889 2994706 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 13:59:09.544179 2994706 system_pods.go:59] 9 kube-system pods found
	I0908 13:59:09.544259 2994706 system_pods.go:61] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:09.544300 2994706 system_pods.go:61] "coredns-66bc5c9577-pbsjb" [98ea4a60-ebdb-4967-903c-0cc645bcefb8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:09.544335 2994706 system_pods.go:61] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:09.544361 2994706 system_pods.go:61] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:09.544383 2994706 system_pods.go:61] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:59:09.544401 2994706 system_pods.go:61] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:09.544419 2994706 system_pods.go:61] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:09.544448 2994706 system_pods.go:61] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:09.544469 2994706 system_pods.go:61] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Pending
	I0908 13:59:09.544487 2994706 system_pods.go:74] duration metric: took 5.579786ms to wait for pod list to return data ...
	I0908 13:59:09.544510 2994706 default_sa.go:34] waiting for default service account to be created ...
	I0908 13:59:09.547511 2994706 default_sa.go:45] found service account: "default"
	I0908 13:59:09.547573 2994706 default_sa.go:55] duration metric: took 3.042046ms for default service account to be created ...
	I0908 13:59:09.547596 2994706 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 13:59:09.553360 2994706 system_pods.go:86] 9 kube-system pods found
	I0908 13:59:09.553461 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:09.553485 2994706 system_pods.go:89] "coredns-66bc5c9577-pbsjb" [98ea4a60-ebdb-4967-903c-0cc645bcefb8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:09.553518 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:09.553541 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:09.553561 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:59:09.553577 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:09.553600 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:09.553626 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:09.553649 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:59:09.553684 2994706 retry.go:31] will retry after 243.459838ms: missing components: kube-dns
	I0908 13:59:09.677721 2994706 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-207352" context rescaled to 1 replicas
	I0908 13:59:09.802000 2994706 system_pods.go:86] 9 kube-system pods found
	I0908 13:59:09.802036 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:09.802045 2994706 system_pods.go:89] "coredns-66bc5c9577-pbsjb" [98ea4a60-ebdb-4967-903c-0cc645bcefb8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:09.802050 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:09.802055 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:09.802062 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:59:09.802067 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:09.802072 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:09.802076 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:09.802083 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:59:09.802101 2994706 retry.go:31] will retry after 259.303543ms: missing components: kube-dns
	I0908 13:59:10.072989 2994706 system_pods.go:86] 9 kube-system pods found
	I0908 13:59:10.073031 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:10.073042 2994706 system_pods.go:89] "coredns-66bc5c9577-pbsjb" [98ea4a60-ebdb-4967-903c-0cc645bcefb8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:10.073048 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:10.073054 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:10.073060 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:59:10.073065 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:10.073070 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:10.073074 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:10.073083 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:59:10.073104 2994706 retry.go:31] will retry after 337.62897ms: missing components: kube-dns
	I0908 13:59:10.415836 2994706 system_pods.go:86] 9 kube-system pods found
	I0908 13:59:10.415883 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:10.415892 2994706 system_pods.go:89] "coredns-66bc5c9577-pbsjb" [98ea4a60-ebdb-4967-903c-0cc645bcefb8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:10.415897 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:10.415903 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:10.415909 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 13:59:10.415914 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:10.415919 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:10.415924 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:10.415937 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:59:10.415952 2994706 retry.go:31] will retry after 558.857162ms: missing components: kube-dns
	I0908 13:59:10.978768 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:10.978806 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:10.978813 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:10.978819 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:10.978824 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:10.978828 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:10.978832 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:10.978836 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:10.978840 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:10.978855 2994706 retry.go:31] will retry after 709.423127ms: missing components: kube-dns
	I0908 13:59:11.692174 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:11.692210 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:11.692219 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:11.692226 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:11.692230 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:11.692235 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:11.692239 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:11.692244 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:11.692248 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:11.692263 2994706 retry.go:31] will retry after 779.121698ms: missing components: kube-dns
	I0908 13:59:12.475401 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:12.475434 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:12.475442 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:12.475448 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:12.475453 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:12.475458 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:12.475462 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:12.475466 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:12.475470 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:12.475485 2994706 retry.go:31] will retry after 801.609014ms: missing components: kube-dns
	I0908 13:59:13.281242 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:13.281276 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:13.281283 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:13.281289 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:13.281294 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:13.281298 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:13.281302 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:13.281306 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:13.281309 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:13.281324 2994706 retry.go:31] will retry after 1.242669209s: missing components: kube-dns
	I0908 13:59:14.529374 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:14.529414 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:14.529421 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:14.529427 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:14.529431 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:14.529435 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:14.529439 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:14.529443 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:14.529447 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:14.529462 2994706 retry.go:31] will retry after 1.310265839s: missing components: kube-dns
	I0908 13:59:15.843183 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:15.843224 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:15.843231 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:15.843237 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:15.843241 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:15.843245 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:15.843249 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:15.843254 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:15.843259 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:15.843272 2994706 retry.go:31] will retry after 1.576124261s: missing components: kube-dns
	I0908 13:59:17.423476 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:17.423509 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:17.423516 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:17.423523 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:17.423527 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:17.423531 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:17.423535 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:17.423539 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:17.423542 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:17.423556 2994706 retry.go:31] will retry after 2.847131202s: missing components: kube-dns
	I0908 13:59:20.275384 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:20.275419 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:20.275427 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:20.275434 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:20.275440 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:20.275444 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:20.275448 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:20.275453 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:20.275457 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:20.275470 2994706 retry.go:31] will retry after 2.658768213s: missing components: kube-dns
	I0908 13:59:22.939440 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:22.939478 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:22.939488 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:22.939495 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:22.939501 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:22.939507 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:22.939511 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:22.939515 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:22.939519 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:22.939534 2994706 retry.go:31] will retry after 4.408939252s: missing components: kube-dns
	I0908 13:59:27.353319 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:27.353356 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:27.353364 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:27.353370 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:27.353375 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:27.353396 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:27.353400 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:27.353405 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:27.353409 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:27.353426 2994706 retry.go:31] will retry after 4.746501932s: missing components: kube-dns
	I0908 13:59:32.103715 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:32.103750 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:32.103757 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:32.103763 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:32.103768 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:32.103772 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:32.103776 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:32.103780 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:32.103784 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:32.103797 2994706 retry.go:31] will retry after 5.168744546s: missing components: kube-dns
	I0908 13:59:37.278157 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:37.278197 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:37.278205 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:37.278212 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:37.278216 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:37.278220 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:37.278224 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:37.278228 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:37.278232 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:37.278247 2994706 retry.go:31] will retry after 7.800882433s: missing components: kube-dns
	I0908 13:59:45.085514 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:45.085563 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:45.085570 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:45.085578 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:45.085583 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:45.085587 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:45.085592 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:45.085596 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:45.085602 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:45.085619 2994706 retry.go:31] will retry after 10.017681341s: missing components: kube-dns
	I0908 13:59:55.107349 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 13:59:55.107390 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:59:55.107398 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 13:59:55.107404 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 13:59:55.107409 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 13:59:55.107413 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 13:59:55.107419 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 13:59:55.107424 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 13:59:55.107428 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 13:59:55.107443 2994706 retry.go:31] will retry after 11.686467016s: missing components: kube-dns
	I0908 14:00:06.798759 2994706 system_pods.go:86] 8 kube-system pods found
	I0908 14:00:06.798790 2994706 system_pods.go:89] "coredns-66bc5c9577-gpmn8" [82b8fbcf-87bd-4d06-b03b-07aefd0a4d8a] Running
	I0908 14:00:06.798798 2994706 system_pods.go:89] "etcd-auto-207352" [70ba0b18-6dc7-40ae-bf73-12af49ecf864] Running
	I0908 14:00:06.798802 2994706 system_pods.go:89] "kindnet-kgbps" [c2eafa25-f0d9-490e-b6bb-40bf839c8d00] Running
	I0908 14:00:06.798806 2994706 system_pods.go:89] "kube-apiserver-auto-207352" [ec16f083-53aa-4536-8e27-19b6811ec7cf] Running
	I0908 14:00:06.798811 2994706 system_pods.go:89] "kube-controller-manager-auto-207352" [7023b810-7a90-4222-8267-0b9a150c28b9] Running
	I0908 14:00:06.798816 2994706 system_pods.go:89] "kube-proxy-ljlrw" [ee7772c2-ff62-4b5b-827f-a281fb6e7ed4] Running
	I0908 14:00:06.798819 2994706 system_pods.go:89] "kube-scheduler-auto-207352" [a82f4dee-befb-408e-b73b-6e29ee74e705] Running
	I0908 14:00:06.798823 2994706 system_pods.go:89] "storage-provisioner" [b2986156-2c0f-4873-a6d4-512ef6dba9a7] Running
	I0908 14:00:06.798831 2994706 system_pods.go:126] duration metric: took 57.251218301s to wait for k8s-apps to be running ...
	I0908 14:00:06.798841 2994706 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 14:00:06.798899 2994706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:00:06.811756 2994706 system_svc.go:56] duration metric: took 12.904721ms WaitForService to wait for kubelet
	I0908 14:00:06.811785 2994706 kubeadm.go:578] duration metric: took 58.629529824s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 14:00:06.811804 2994706 node_conditions.go:102] verifying NodePressure condition ...
	I0908 14:00:06.814810 2994706 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 14:00:06.814845 2994706 node_conditions.go:123] node cpu capacity is 2
	I0908 14:00:06.814859 2994706 node_conditions.go:105] duration metric: took 3.05007ms to run NodePressure ...
	I0908 14:00:06.814873 2994706 start.go:241] waiting for startup goroutines ...
	I0908 14:00:06.814881 2994706 start.go:246] waiting for cluster config update ...
	I0908 14:00:06.814892 2994706 start.go:255] writing updated cluster config ...
	I0908 14:00:06.815196 2994706 ssh_runner.go:195] Run: rm -f paused
	I0908 14:00:06.818689 2994706 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 14:00:06.822118 2994706 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gpmn8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:00:06.827302 2994706 pod_ready.go:94] pod "coredns-66bc5c9577-gpmn8" is "Ready"
	I0908 14:00:06.827333 2994706 pod_ready.go:86] duration metric: took 5.184057ms for pod "coredns-66bc5c9577-gpmn8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:00:06.830142 2994706 pod_ready.go:83] waiting for pod "etcd-auto-207352" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:00:06.835031 2994706 pod_ready.go:94] pod "etcd-auto-207352" is "Ready"
	I0908 14:00:06.835062 2994706 pod_ready.go:86] duration metric: took 4.892405ms for pod "etcd-auto-207352" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:00:06.837444 2994706 pod_ready.go:83] waiting for pod "kube-apiserver-auto-207352" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:00:06.842913 2994706 pod_ready.go:94] pod "kube-apiserver-auto-207352" is "Ready"
	I0908 14:00:06.842943 2994706 pod_ready.go:86] duration metric: took 5.440796ms for pod "kube-apiserver-auto-207352" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:00:06.845392 2994706 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-207352" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:00:07.223074 2994706 pod_ready.go:94] pod "kube-controller-manager-auto-207352" is "Ready"
	I0908 14:00:07.223101 2994706 pod_ready.go:86] duration metric: took 377.684483ms for pod "kube-controller-manager-auto-207352" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:00:07.423592 2994706 pod_ready.go:83] waiting for pod "kube-proxy-ljlrw" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:00:07.823525 2994706 pod_ready.go:94] pod "kube-proxy-ljlrw" is "Ready"
	I0908 14:00:07.823554 2994706 pod_ready.go:86] duration metric: took 399.931417ms for pod "kube-proxy-ljlrw" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:00:08.023312 2994706 pod_ready.go:83] waiting for pod "kube-scheduler-auto-207352" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:00:08.423313 2994706 pod_ready.go:94] pod "kube-scheduler-auto-207352" is "Ready"
	I0908 14:00:08.423339 2994706 pod_ready.go:86] duration metric: took 399.982199ms for pod "kube-scheduler-auto-207352" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 14:00:08.423352 2994706 pod_ready.go:40] duration metric: took 1.604630942s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 14:00:08.484155 2994706 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 14:00:08.487397 2994706 out.go:179] * Done! kubectl is now configured to use "auto-207352" cluster and "default" namespace by default
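	[editor's note] The minikube start log above ends only after system_pods.go and pod_ready.go have polled the kube-system pods, backing off between retries, until every component (including kube-dns) reports a Ready condition. The following is a minimal client-go sketch of that kind of readiness check, added for reference; the kubeconfig path, label selector, and back-off values are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether a pod carries a PodReady condition with status True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		// Assumption: the default kubeconfig points at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll the kube-dns pods in kube-system, doubling the wait between attempts,
		// roughly mirroring the retry.go back-off intervals visible in the log above.
		for backoff := 250 * time.Millisecond; ; backoff *= 2 {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err == nil && len(pods.Items) > 0 {
				allReady := true
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						allReady = false
						break
					}
				}
				if allReady {
					fmt.Println("kube-dns is Ready")
					return
				}
			}
			time.Sleep(backoff)
		}
	}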
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	6ef2ff9d9d76f       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   8                   49598c4430788       dashboard-metrics-scraper-6ffb444bf9-vqcnl
	afd51157a2796       ba04bb24b9575       17 minutes ago      Running             storage-provisioner         2                   5e3df7e102b84       storage-provisioner
	5932df6e2b405       138784d87c9c5       18 minutes ago      Running             coredns                     1                   51cf25b2438a7       coredns-66bc5c9577-q4jkl
	656877048dd4b       1611cd07b61d5       18 minutes ago      Running             busybox                     1                   d84100c9117ec       busybox
	27e765ee1007c       ba04bb24b9575       18 minutes ago      Exited              storage-provisioner         1                   5e3df7e102b84       storage-provisioner
	4195b54b5b0a0       6fc32d66c1411       18 minutes ago      Running             kube-proxy                  1                   dab1f4a92dfc6       kube-proxy-hqvv8
	70096c3780988       b1a8c6f707935       18 minutes ago      Running             kindnet-cni                 1                   ce0e6d115f32d       kindnet-pmj4f
	c47ed1a29ab5d       a25f5ef9c34c3       18 minutes ago      Running             kube-scheduler              1                   73062379e656d       kube-scheduler-default-k8s-diff-port-153420
	492498c3db6a0       a1894772a478e       18 minutes ago      Running             etcd                        1                   2abaf138462ab       etcd-default-k8s-diff-port-153420
	8e83b460fd12c       996be7e86d9b3       18 minutes ago      Running             kube-controller-manager     1                   8c4dccbbe72da       kube-controller-manager-default-k8s-diff-port-153420
	c34a3fdb0a185       d291939e99406       18 minutes ago      Running             kube-apiserver              1                   e63385c5c0e8a       kube-apiserver-default-k8s-diff-port-153420
	a22e36e812d1d       1611cd07b61d5       19 minutes ago      Exited              busybox                     0                   16bd4360afd4a       busybox
	c92bcab10a87a       138784d87c9c5       19 minutes ago      Exited              coredns                     0                   2853b9c0457ae       coredns-66bc5c9577-q4jkl
	c83f774f7cb15       6fc32d66c1411       20 minutes ago      Exited              kube-proxy                  0                   529eff64180e1       kube-proxy-hqvv8
	1e493f2b781b5       b1a8c6f707935       20 minutes ago      Exited              kindnet-cni                 0                   862b9957c2425       kindnet-pmj4f
	560003c0df9ab       a25f5ef9c34c3       20 minutes ago      Exited              kube-scheduler              0                   531cfaddc4c41       kube-scheduler-default-k8s-diff-port-153420
	80d5d4731aabe       a1894772a478e       20 minutes ago      Exited              etcd                        0                   ec40d54622748       etcd-default-k8s-diff-port-153420
	b4369cf69b787       996be7e86d9b3       20 minutes ago      Exited              kube-controller-manager     0                   d88e303662fdf       kube-controller-manager-default-k8s-diff-port-153420
	4566249845692       d291939e99406       20 minutes ago      Exited              kube-apiserver              0                   b32e5ed6cf61f       kube-apiserver-default-k8s-diff-port-153420
	
	
	==> containerd <==
	Sep 08 13:52:58 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:52:58.311638494Z" level=info msg="StartContainer for \"3f4f0fbe175078403daca011f3a93fe22ff54599a702c12c1524c82af3855cb2\" returns successfully"
	Sep 08 13:52:58 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:52:58.334612507Z" level=info msg="shim disconnected" id=3f4f0fbe175078403daca011f3a93fe22ff54599a702c12c1524c82af3855cb2 namespace=k8s.io
	Sep 08 13:52:58 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:52:58.334882833Z" level=warning msg="cleaning up after shim disconnected" id=3f4f0fbe175078403daca011f3a93fe22ff54599a702c12c1524c82af3855cb2 namespace=k8s.io
	Sep 08 13:52:58 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:52:58.335042591Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:52:59 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:52:59.252209187Z" level=info msg="RemoveContainer for \"c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b\""
	Sep 08 13:52:59 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:52:59.260613781Z" level=info msg="RemoveContainer for \"c5409ca9f6838eb2da308e2c984acbd5f399e20447c9a808d6b8d1b9bbb5c79b\" returns successfully"
	Sep 08 13:57:46 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:57:46.199477105Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 08 13:57:46 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:57:46.208659404Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 08 13:57:46 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:57:46.211010409Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 08 13:57:46 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:57:46.211026146Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 08 13:58:06 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:06.198112533Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 13:58:06 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:06.200704623Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:58:06 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:06.329491390Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 08 13:58:06 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:06.592573139Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 13:58:06 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:06.592885352Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 08 13:58:09 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:09.201535364Z" level=info msg="CreateContainer within sandbox \"49598c4430788a9d119f7b7ab2c6706371a94b754f1b213f97db32619efb77bf\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Sep 08 13:58:09 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:09.233059641Z" level=info msg="CreateContainer within sandbox \"49598c4430788a9d119f7b7ab2c6706371a94b754f1b213f97db32619efb77bf\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f\""
	Sep 08 13:58:09 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:09.233902800Z" level=info msg="StartContainer for \"6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f\""
	Sep 08 13:58:09 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:09.289867020Z" level=info msg="StartContainer for \"6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f\" returns successfully"
	Sep 08 13:58:09 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:09.290106217Z" level=info msg="received exit event container_id:\"6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f\" id:\"6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f\" pid:3280 exit_status:255 exited_at:{seconds:1757339889 nanos:287205788}"
	Sep 08 13:58:09 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:09.315779675Z" level=info msg="shim disconnected" id=6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f namespace=k8s.io
	Sep 08 13:58:09 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:09.315822685Z" level=warning msg="cleaning up after shim disconnected" id=6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f namespace=k8s.io
	Sep 08 13:58:09 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:09.315859091Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 08 13:58:10 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:10.097368146Z" level=info msg="RemoveContainer for \"3f4f0fbe175078403daca011f3a93fe22ff54599a702c12c1524c82af3855cb2\""
	Sep 08 13:58:10 default-k8s-diff-port-153420 containerd[573]: time="2025-09-08T13:58:10.104265744Z" level=info msg="RemoveContainer for \"3f4f0fbe175078403daca011f3a93fe22ff54599a702c12c1524c82af3855cb2\" returns successfully"
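	[editor's note] Both PullImage failures in the containerd log above (the fake.domain DNS lookup and the docker.io "429 Too Many Requests" rate limit) are reported by containerd itself, so they can be reproduced against the same containerd socket without going through the kubelet. The sketch below uses the containerd Go client with the default socket path and the k8s.io namespace; it is a hypothetical diagnostic, and pulling by tag rather than the digest shown in the log is an assumption for brevity.

	package main

	import (
		"context"
		"fmt"

		"github.com/containerd/containerd"
		"github.com/containerd/containerd/namespaces"
	)

	func main() {
		// Connect to the same containerd instance the kubelet uses (default socket).
		client, err := containerd.New("/run/containerd/containerd.sock")
		if err != nil {
			panic(err)
		}
		defer client.Close()

		// Kubernetes-managed images live in the "k8s.io" containerd namespace.
		ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

		// The dashboard image the kubelet was asked to pull; an unauthenticated
		// pull from docker.io can fail with "429 Too Many Requests" as logged above.
		ref := "docker.io/kubernetesui/dashboard:v2.7.0"
		img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
		if err != nil {
			fmt.Println("pull failed:", err)
			return
		}
		fmt.Println("pulled", img.Name())
	}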
	
	
	==> coredns [5932df6e2b405dddbb51b36a68d76e388c1f6515d5ffe6bceda39f8ccd97e16b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33147 - 49043 "HINFO IN 5771053844341058191.1937657061796243996. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015527834s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [c92bcab10a87ad0871a640aa80b2152b1fd54da7db385c67cbfd5b5c9f43fe8e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35341 - 23060 "HINFO IN 6672547217869353072.3588437659851635492. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039998722s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-153420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-153420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=default-k8s-diff-port-153420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_40_13_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:40:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-153420
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 14:00:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 13:58:51 +0000   Mon, 08 Sep 2025 13:40:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 13:58:51 +0000   Mon, 08 Sep 2025 13:40:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 13:58:51 +0000   Mon, 08 Sep 2025 13:40:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 13:58:51 +0000   Mon, 08 Sep 2025 13:40:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-153420
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0597b83646154e3a8b13dcf3c073e745
	  System UUID:                7ee334b3-e360-4fac-af6e-409ec1db3f12
	  Boot ID:                    9f5228b8-b58e-4b72-938a-84f5f7e9d841
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-q4jkl                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     20m
	  kube-system                 etcd-default-k8s-diff-port-153420                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         20m
	  kube-system                 kindnet-pmj4f                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-153420             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-153420    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-hqvv8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-153420             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-746fcd58dc-5wgrj                         100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vqcnl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kqtp5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 20m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-153420 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-153420 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-153420 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           20m                node-controller  Node default-k8s-diff-port-153420 event: Registered Node default-k8s-diff-port-153420 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-153420 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-153420 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-153420 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                node-controller  Node default-k8s-diff-port-153420 event: Registered Node default-k8s-diff-port-153420 in Controller
	
	
	==> dmesg <==
	[Sep 8 12:32] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [492498c3db6a02a8e65f38b9e66bd193f09ff7919b4f8541bf2885a912547779] <==
	{"level":"warn","ts":"2025-09-08T13:41:49.663511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.677688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.719397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.741765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.758878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.782591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.813925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.836444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.849451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.871546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.896446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.918597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.940943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.962679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:49.990769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:50.046764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:50.066460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:50.079397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:50.150493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38550","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T13:51:48.713176Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1059}
	{"level":"info","ts":"2025-09-08T13:51:48.737739Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1059,"took":"24.132571ms","hash":3701126627,"current-db-size-bytes":3272704,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1359872,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-08T13:51:48.737792Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3701126627,"revision":1059,"compact-revision":-1}
	{"level":"info","ts":"2025-09-08T13:56:48.719895Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1315}
	{"level":"info","ts":"2025-09-08T13:56:48.723409Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1315,"took":"3.226729ms","hash":2362328005,"current-db-size-bytes":3272704,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-08T13:56:48.723456Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2362328005,"revision":1315,"compact-revision":1059}
	
	
	==> etcd [80d5d4731aabeab6901bda592deb55b198b5210763d8c88d00d21c2622dc391c] <==
	{"level":"warn","ts":"2025-09-08T13:40:08.338958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.351905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.368152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.387629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.405176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.424084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.449136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.461621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.538731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.552130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.574819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.588356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.606252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.625370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.661828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.669186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.693039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.713551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.729613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.750735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.772942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.800753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.819321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.840042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:40:08.941700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46108","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:00:35 up 19:43,  0 users,  load average: 2.28, 1.49, 1.16
	Linux default-k8s-diff-port-153420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1e493f2b781b56da57b2e825d02a2a9f27a28c4bc9bd840c5b5e3a91641a57fd] <==
	I0908 13:40:18.909621       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 13:40:18.910936       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0908 13:40:18.911098       1 main.go:148] setting mtu 1500 for CNI 
	I0908 13:40:18.911116       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 13:40:18.911128       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T13:40:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0908 13:40:19.198890       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 13:40:19.199103       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 13:40:19.199203       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 13:40:19.202239       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0908 13:40:49.114492       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0908 13:40:49.199179       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0908 13:40:49.201740       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0908 13:40:49.203897       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0908 13:40:50.401201       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 13:40:50.401232       1 metrics.go:72] Registering metrics
	I0908 13:40:50.401306       1 controller.go:711] "Syncing nftables rules"
	I0908 13:40:59.118426       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:40:59.118483       1 main.go:301] handling current node
	I0908 13:41:09.121559       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:41:09.121597       1 main.go:301] handling current node
	I0908 13:41:19.114475       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:41:19.114505       1 main.go:301] handling current node
	
	
	==> kindnet [70096c3780988aad95584c16901486f94adad14bb3aa2c833fda07b0908c74a2] <==
	I0908 13:58:33.713631       1 main.go:301] handling current node
	I0908 13:58:43.715831       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:58:43.718368       1 main.go:301] handling current node
	I0908 13:58:53.714603       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:58:53.714636       1 main.go:301] handling current node
	I0908 13:59:03.712849       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:59:03.712886       1 main.go:301] handling current node
	I0908 13:59:13.721075       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:59:13.721174       1 main.go:301] handling current node
	I0908 13:59:23.712748       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:59:23.712784       1 main.go:301] handling current node
	I0908 13:59:33.713156       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:59:33.713192       1 main.go:301] handling current node
	I0908 13:59:43.718486       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:59:43.718523       1 main.go:301] handling current node
	I0908 13:59:53.714679       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 13:59:53.714767       1 main.go:301] handling current node
	I0908 14:00:03.712713       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 14:00:03.712750       1 main.go:301] handling current node
	I0908 14:00:13.721480       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 14:00:13.721514       1 main.go:301] handling current node
	I0908 14:00:23.713257       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 14:00:23.713308       1 main.go:301] handling current node
	I0908 14:00:33.713338       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 14:00:33.713386       1 main.go:301] handling current node
	
	
	==> kube-apiserver [45662498456929054ae3b40fdbb16f05b31d0498f1bdb935defcd60bd0a73b31] <==
	I0908 13:40:12.499860       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0908 13:40:17.030815       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 13:40:17.038530       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 13:40:17.479401       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0908 13:40:17.892873       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0908 13:41:16.871392       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 13:41:24.854106       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:57768: use of closed network connection
	I0908 13:41:25.637791       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 13:41:25.646616       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:41:25.646678       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0908 13:41:25.646729       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0908 13:41:25.799622       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.101.146.193"}
	W0908 13:41:25.831434       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:41:25.831492       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0908 13:41:25.856652       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:41:25.856704       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [c34a3fdb0a1851debb2ad4bb26c0242412b330f7610f2cf5f7f1f1d21c2d5230] <==
	I0908 13:57:03.450655       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 13:57:52.349099       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:57:52.349152       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 13:57:52.349165       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:57:52.350478       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:57:52.350604       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 13:57:52.350623       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 13:58:02.342794       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:58:09.777896       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:59:02.711079       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:59:25.343534       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 13:59:52.349717       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:59:52.349771       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 13:59:52.349785       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 13:59:52.350787       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 13:59:52.350934       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 13:59:52.350953       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 14:00:17.671042       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:00:29.270143       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [8e83b460fd12c11caabaccc49bab4239aaaf8183b1f6a397ef6d4615f6075ca8] <==
	I0908 13:54:27.044099       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:54:56.969195       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:54:57.052190       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:55:26.973551       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:55:27.059985       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:55:56.978209       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:55:57.066975       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:56:26.982912       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:56:27.073951       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:56:57.002236       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:56:57.081106       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:57:27.008164       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:57:27.088008       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:57:57.013244       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:57:57.095726       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:58:27.020329       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:58:27.104494       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:58:57.050393       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:58:57.116653       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:59:27.055812       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:59:27.130009       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 13:59:57.060761       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:59:57.138260       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 14:00:27.066389       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 14:00:27.147151       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [b4369cf69b787c103dc54593695e0c7b6fd035f36d82f02245556ec1854ce8d4] <==
	I0908 13:40:16.872883       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 13:40:16.873239       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 13:40:16.873629       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 13:40:16.873987       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 13:40:16.873644       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 13:40:16.875846       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 13:40:16.875921       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 13:40:16.875947       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 13:40:16.876139       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 13:40:16.877605       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 13:40:16.881083       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0908 13:40:16.881154       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0908 13:40:16.881182       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 13:40:16.881231       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0908 13:40:16.881243       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0908 13:40:16.881249       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0908 13:40:16.881815       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 13:40:16.891193       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-153420" podCIDRs=["10.244.0.0/24"]
	I0908 13:40:16.905512       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:40:16.915804       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0908 13:40:16.921674       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:40:16.921697       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 13:40:16.921704       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 13:40:16.923783       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 13:40:16.924295       1 shared_informer.go:356] "Caches are synced" controller="GC"
	
	
	==> kube-proxy [4195b54b5b0a07dcb07b8c63a6551ef910c664c9fb580ab96eea8a50122fd9d9] <==
	I0908 13:41:53.634456       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:41:53.732937       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:41:53.833753       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:41:53.833795       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0908 13:41:53.833874       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:41:53.985161       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:41:53.985214       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:41:53.995651       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:41:53.996020       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:41:53.996043       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:41:53.997223       1 config.go:200] "Starting service config controller"
	I0908 13:41:53.997238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:41:54.006839       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:41:54.006876       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:41:54.006937       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:41:54.006949       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:41:54.015533       1 config.go:309] "Starting node config controller"
	I0908 13:41:54.015563       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:41:54.015571       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:41:54.098262       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:41:54.107130       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:41:54.107173       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c83f774f7cb15ec989c3b862182b8ada28a7212590cabe73828ad3cbc652e5c9] <==
	I0908 13:40:18.904828       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:40:19.014183       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:40:19.114952       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:40:19.115032       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0908 13:40:19.115102       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:40:19.137502       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:40:19.137735       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:40:19.141741       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:40:19.143612       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:40:19.147158       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:40:19.148754       1 config.go:200] "Starting service config controller"
	I0908 13:40:19.148764       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:40:19.148777       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:40:19.148781       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:40:19.148795       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:40:19.148798       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:40:19.152366       1 config.go:309] "Starting node config controller"
	I0908 13:40:19.159742       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:40:19.159824       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:40:19.249931       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:40:19.250171       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 13:40:19.249868       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [560003c0df9ab37f0413fad8d4cfa177c37cab99ce50a679a4dfe3d45f9b9806] <==
	E0908 13:40:10.061570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 13:40:10.061968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 13:40:10.062175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 13:40:10.062463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0908 13:40:10.064515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 13:40:10.064854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 13:40:10.065067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 13:40:10.065455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 13:40:10.065655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 13:40:10.065870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 13:40:10.066117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 13:40:10.066492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 13:40:10.066688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 13:40:10.068636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 13:40:10.889343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 13:40:10.926051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 13:40:10.927938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 13:40:10.931914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 13:40:10.974020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 13:40:11.030684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 13:40:11.035457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 13:40:11.045750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 13:40:11.099010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 13:40:11.296650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0908 13:40:13.125024       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [c47ed1a29ab5d937dcdc5fb5186575edfb00bccfda6c84f0d9428103484bf684] <==
	I0908 13:41:51.134333       1 serving.go:386] Generated self-signed cert in-memory
	I0908 13:41:52.685007       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 13:41:52.685214       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:41:52.701110       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:41:52.701181       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 13:41:52.701213       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 13:41:52.701334       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:41:52.701350       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:41:52.701291       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 13:41:52.701550       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:41:52.701566       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:41:52.802897       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:41:52.802957       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 13:41:52.803049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 13:59:11 default-k8s-diff-port-153420 kubelet[662]: E0908 13:59:11.199232     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 13:59:17 default-k8s-diff-port-153420 kubelet[662]: E0908 13:59:17.198435     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-5wgrj" podUID="0d04be90-cd69-469c-a678-e605db608f3a"
	Sep 08 13:59:17 default-k8s-diff-port-153420 kubelet[662]: E0908 13:59:17.198829     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5" podUID="1d2664d7-c720-46c0-8112-72e5cac1495d"
	Sep 08 13:59:26 default-k8s-diff-port-153420 kubelet[662]: I0908 13:59:26.197108     662 scope.go:117] "RemoveContainer" containerID="6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f"
	Sep 08 13:59:26 default-k8s-diff-port-153420 kubelet[662]: E0908 13:59:26.197811     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 13:59:30 default-k8s-diff-port-153420 kubelet[662]: E0908 13:59:30.197648     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5" podUID="1d2664d7-c720-46c0-8112-72e5cac1495d"
	Sep 08 13:59:31 default-k8s-diff-port-153420 kubelet[662]: E0908 13:59:31.197601     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-5wgrj" podUID="0d04be90-cd69-469c-a678-e605db608f3a"
	Sep 08 13:59:39 default-k8s-diff-port-153420 kubelet[662]: I0908 13:59:39.197712     662 scope.go:117] "RemoveContainer" containerID="6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f"
	Sep 08 13:59:39 default-k8s-diff-port-153420 kubelet[662]: E0908 13:59:39.198458     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 13:59:44 default-k8s-diff-port-153420 kubelet[662]: E0908 13:59:44.198600     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5" podUID="1d2664d7-c720-46c0-8112-72e5cac1495d"
	Sep 08 13:59:45 default-k8s-diff-port-153420 kubelet[662]: E0908 13:59:45.198600     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-5wgrj" podUID="0d04be90-cd69-469c-a678-e605db608f3a"
	Sep 08 13:59:52 default-k8s-diff-port-153420 kubelet[662]: I0908 13:59:52.197366     662 scope.go:117] "RemoveContainer" containerID="6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f"
	Sep 08 13:59:52 default-k8s-diff-port-153420 kubelet[662]: E0908 13:59:52.197553     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 13:59:59 default-k8s-diff-port-153420 kubelet[662]: E0908 13:59:59.198166     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5" podUID="1d2664d7-c720-46c0-8112-72e5cac1495d"
	Sep 08 14:00:00 default-k8s-diff-port-153420 kubelet[662]: E0908 14:00:00.208008     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-5wgrj" podUID="0d04be90-cd69-469c-a678-e605db608f3a"
	Sep 08 14:00:06 default-k8s-diff-port-153420 kubelet[662]: I0908 14:00:06.196943     662 scope.go:117] "RemoveContainer" containerID="6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f"
	Sep 08 14:00:06 default-k8s-diff-port-153420 kubelet[662]: E0908 14:00:06.197208     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 14:00:12 default-k8s-diff-port-153420 kubelet[662]: E0908 14:00:12.198048     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5" podUID="1d2664d7-c720-46c0-8112-72e5cac1495d"
	Sep 08 14:00:14 default-k8s-diff-port-153420 kubelet[662]: E0908 14:00:14.198584     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-5wgrj" podUID="0d04be90-cd69-469c-a678-e605db608f3a"
	Sep 08 14:00:20 default-k8s-diff-port-153420 kubelet[662]: I0908 14:00:20.197390     662 scope.go:117] "RemoveContainer" containerID="6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f"
	Sep 08 14:00:20 default-k8s-diff-port-153420 kubelet[662]: E0908 14:00:20.197593     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	Sep 08 14:00:26 default-k8s-diff-port-153420 kubelet[662]: E0908 14:00:26.199355     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kqtp5" podUID="1d2664d7-c720-46c0-8112-72e5cac1495d"
	Sep 08 14:00:26 default-k8s-diff-port-153420 kubelet[662]: E0908 14:00:26.199755     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-5wgrj" podUID="0d04be90-cd69-469c-a678-e605db608f3a"
	Sep 08 14:00:32 default-k8s-diff-port-153420 kubelet[662]: I0908 14:00:32.197147     662 scope.go:117] "RemoveContainer" containerID="6ef2ff9d9d76ff01e5e9a6f186d7b695cd1677ba8bc77d9f34bab8567128db2f"
	Sep 08 14:00:32 default-k8s-diff-port-153420 kubelet[662]: E0908 14:00:32.197820     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqcnl_kubernetes-dashboard(f613d421-0eca-4922-b55b-eec7afcaa635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqcnl" podUID="f613d421-0eca-4922-b55b-eec7afcaa635"
	
	
	==> storage-provisioner [27e765ee1007c29808a576a6c1ad0934eff58f69c16eae2340cc76c4d969d1b9] <==
	I0908 13:41:53.678623       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 13:42:23.680287       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [afd51157a2796b9198ad66e7adc0e148bfcc67af22e2524e29b920137c01342b] <==
	W0908 14:00:10.900995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:12.904519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:12.909233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:14.912848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:14.920705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:16.923747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:16.928723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:18.931713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:18.936208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:20.938861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:20.947106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:22.952878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:22.957847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:24.962980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:24.967685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:26.972636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:26.982894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:28.990864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:28.998125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:31.003836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:31.013899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:33.018452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:33.033706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:35.037362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:00:35.046513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-153420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-5wgrj kubernetes-dashboard-855c9754f9-kqtp5
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-153420 describe pod metrics-server-746fcd58dc-5wgrj kubernetes-dashboard-855c9754f9-kqtp5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-153420 describe pod metrics-server-746fcd58dc-5wgrj kubernetes-dashboard-855c9754f9-kqtp5: exit status 1 (130.827517ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-5wgrj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-kqtp5" not found

** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-153420 describe pod metrics-server-746fcd58dc-5wgrj kubernetes-dashboard-855c9754f9-kqtp5: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (544.87s)
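Root-cause note: both non-running pods trace back to image pulls recorded in the kubelet log above. The kubernetes-dashboard pull was rejected by registry-1.docker.io with 429 Too Many Requests (the unauthenticated pull rate limit), while metrics-server points at fake.domain, which does not resolve. The post-mortem describe returns NotFound because it names no namespace, so kubectl looks only in the default namespace; a sketch of the namespace-qualified equivalent, using the pod names and namespaces taken from the output above, would be:

kubectl --context default-k8s-diff-port-153420 -n kube-system describe pod metrics-server-746fcd58dc-5wgrj
kubectl --context default-k8s-diff-port-153420 -n kubernetes-dashboard describe pod kubernetes-dashboard-855c9754f9-kqtp5

Pre-loading the dashboard image into the node (for example with the minikube image load subcommand) or authenticating registry pulls are possible mitigations for the 429, though neither is part of this recorded run.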

TestNetworkPlugins/group/calico/Start (930.22s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0908 14:01:16.680943 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:16.687216 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:16.698593 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:16.719994 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:16.761380 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:16.842754 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:17.004121 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:17.325871 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:17.967427 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:19.249099 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:21.810708 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:26.932631 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:34.195422 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:37.174035 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:57.655389 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p calico-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (15m30.170490031s)

-- stdout --
	* [calico-207352] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-207352" primary control-plane node in "calico-207352" cluster
	* Pulling base image v0.0.47-1756980985-21488 ...
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0908 14:00:45.762689 3002591 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:00:45.762917 3002591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:00:45.762945 3002591 out.go:374] Setting ErrFile to fd 2...
	I0908 14:00:45.762966 3002591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:00:45.763243 3002591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 14:00:45.763690 3002591 out.go:368] Setting JSON to false
	I0908 14:00:45.764606 3002591 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70998,"bootTime":1757269048,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 14:00:45.764700 3002591 start.go:140] virtualization:  
	I0908 14:00:45.770601 3002591 out.go:179] * [calico-207352] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 14:00:45.775017 3002591 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 14:00:45.775088 3002591 notify.go:220] Checking for updates...
	I0908 14:00:45.784141 3002591 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:00:45.787979 3002591 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 14:00:45.792458 3002591 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 14:00:45.796300 3002591 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 14:00:45.800024 3002591 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:00:45.804181 3002591 config.go:182] Loaded profile config "kindnet-207352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 14:00:45.804348 3002591 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:00:45.842925 3002591 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 14:00:45.843042 3002591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:00:45.937280 3002591 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2025-09-08 14:00:45.928017324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:00:45.937397 3002591 docker.go:318] overlay module found
	I0908 14:00:45.943960 3002591 out.go:179] * Using the docker driver based on user configuration
	I0908 14:00:45.947376 3002591 start.go:304] selected driver: docker
	I0908 14:00:45.947402 3002591 start.go:918] validating driver "docker" against <nil>
	I0908 14:00:45.947416 3002591 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:00:45.948191 3002591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:00:46.040027 3002591 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:58 SystemTime:2025-09-08 14:00:46.030885217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:00:46.040183 3002591 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 14:00:46.040422 3002591 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 14:00:46.043832 3002591 out.go:179] * Using Docker driver with root privileges
	I0908 14:00:46.047066 3002591 cni.go:84] Creating CNI manager for "calico"
	I0908 14:00:46.047094 3002591 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0908 14:00:46.047185 3002591 start.go:348] cluster config:
	{Name:calico-207352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-207352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:00:46.052616 3002591 out.go:179] * Starting "calico-207352" primary control-plane node in "calico-207352" cluster
	I0908 14:00:46.055795 3002591 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 14:00:46.059097 3002591 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 14:00:46.062038 3002591 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 14:00:46.062119 3002591 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
	I0908 14:00:46.062131 3002591 cache.go:58] Caching tarball of preloaded images
	I0908 14:00:46.062138 3002591 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 14:00:46.062237 3002591 preload.go:172] Found /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0908 14:00:46.062249 3002591 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0908 14:00:46.062408 3002591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/config.json ...
	I0908 14:00:46.062437 3002591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/config.json: {Name:mkfeda2a8884c5462281420376479bce7b1a8593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:00:46.100283 3002591 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 14:00:46.100308 3002591 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 14:00:46.100326 3002591 cache.go:232] Successfully downloaded all kic artifacts
	I0908 14:00:46.100366 3002591 start.go:360] acquireMachinesLock for calico-207352: {Name:mk278adfd59987db9970acb1517c2cea00be6396 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 14:00:46.100489 3002591 start.go:364] duration metric: took 100.953µs to acquireMachinesLock for "calico-207352"
	I0908 14:00:46.100521 3002591 start.go:93] Provisioning new machine with config: &{Name:calico-207352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-207352 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 14:00:46.100595 3002591 start.go:125] createHost starting for "" (driver="docker")
	I0908 14:00:46.104361 3002591 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0908 14:00:46.104607 3002591 start.go:159] libmachine.API.Create for "calico-207352" (driver="docker")
	I0908 14:00:46.104654 3002591 client.go:168] LocalClient.Create starting
	I0908 14:00:46.104743 3002591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem
	I0908 14:00:46.104788 3002591 main.go:141] libmachine: Decoding PEM data...
	I0908 14:00:46.104808 3002591 main.go:141] libmachine: Parsing certificate...
	I0908 14:00:46.104861 3002591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem
	I0908 14:00:46.104884 3002591 main.go:141] libmachine: Decoding PEM data...
	I0908 14:00:46.104899 3002591 main.go:141] libmachine: Parsing certificate...
	I0908 14:00:46.105258 3002591 cli_runner.go:164] Run: docker network inspect calico-207352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 14:00:46.137876 3002591 cli_runner.go:211] docker network inspect calico-207352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 14:00:46.137967 3002591 network_create.go:284] running [docker network inspect calico-207352] to gather additional debugging logs...
	I0908 14:00:46.137997 3002591 cli_runner.go:164] Run: docker network inspect calico-207352
	W0908 14:00:46.168283 3002591 cli_runner.go:211] docker network inspect calico-207352 returned with exit code 1
	I0908 14:00:46.168325 3002591 network_create.go:287] error running [docker network inspect calico-207352]: docker network inspect calico-207352: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-207352 not found
	I0908 14:00:46.168354 3002591 network_create.go:289] output of [docker network inspect calico-207352]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-207352 not found
	
	** /stderr **
	I0908 14:00:46.168672 3002591 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 14:00:46.198974 3002591 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e8e4d680522a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e6:fc:21:01:0d:7c} reservation:<nil>}
	I0908 14:00:46.199265 3002591 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f4041ba326ec IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:98:55:eb:1e:7f} reservation:<nil>}
	I0908 14:00:46.199555 3002591 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-24188c4f7034 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:e3:11:f1:6e:19} reservation:<nil>}
	I0908 14:00:46.199799 3002591 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c2f22d1c3adf IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:12:6b:6e:d6:12} reservation:<nil>}
	I0908 14:00:46.200178 3002591 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195fda0}
	I0908 14:00:46.200196 3002591 network_create.go:124] attempt to create docker network calico-207352 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0908 14:00:46.200252 3002591 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-207352 calico-207352
	I0908 14:00:46.286460 3002591 network_create.go:108] docker network calico-207352 192.168.85.0/24 created
	I0908 14:00:46.286493 3002591 kic.go:121] calculated static IP "192.168.85.2" for the "calico-207352" container
	I0908 14:00:46.286582 3002591 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 14:00:46.308548 3002591 cli_runner.go:164] Run: docker volume create calico-207352 --label name.minikube.sigs.k8s.io=calico-207352 --label created_by.minikube.sigs.k8s.io=true
	I0908 14:00:46.325496 3002591 oci.go:103] Successfully created a docker volume calico-207352
	I0908 14:00:46.325590 3002591 cli_runner.go:164] Run: docker run --rm --name calico-207352-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-207352 --entrypoint /usr/bin/test -v calico-207352:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 14:00:48.855906 3002591 cli_runner.go:217] Completed: docker run --rm --name calico-207352-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-207352 --entrypoint /usr/bin/test -v calico-207352:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib: (2.530275278s)
	I0908 14:00:48.855941 3002591 oci.go:107] Successfully prepared a docker volume calico-207352
	I0908 14:00:48.855968 3002591 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 14:00:48.855986 3002591 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 14:00:48.856061 3002591 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-207352:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 14:00:52.894156 3002591 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-207352:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.038045931s)
	I0908 14:00:52.894185 3002591 kic.go:203] duration metric: took 4.03819512s to extract preloaded images to volume ...
	W0908 14:00:52.894310 3002591 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 14:00:52.894595 3002591 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 14:00:52.980888 3002591 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-207352 --name calico-207352 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-207352 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-207352 --network calico-207352 --ip 192.168.85.2 --volume calico-207352:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 14:00:53.380784 3002591 cli_runner.go:164] Run: docker container inspect calico-207352 --format={{.State.Running}}
	I0908 14:00:53.405261 3002591 cli_runner.go:164] Run: docker container inspect calico-207352 --format={{.State.Status}}
	I0908 14:00:53.447048 3002591 cli_runner.go:164] Run: docker exec calico-207352 stat /var/lib/dpkg/alternatives/iptables
	I0908 14:00:53.503937 3002591 oci.go:144] the created container "calico-207352" has a running status.
	I0908 14:00:53.503973 3002591 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/calico-207352/id_rsa...
	I0908 14:00:53.787760 3002591 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/calico-207352/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 14:00:53.826080 3002591 cli_runner.go:164] Run: docker container inspect calico-207352 --format={{.State.Status}}
	I0908 14:00:53.852205 3002591 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 14:00:53.852228 3002591 kic_runner.go:114] Args: [docker exec --privileged calico-207352 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 14:00:53.945151 3002591 cli_runner.go:164] Run: docker container inspect calico-207352 --format={{.State.Status}}
	I0908 14:00:53.978096 3002591 machine.go:93] provisionDockerMachine start ...
	I0908 14:00:53.978196 3002591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-207352
	I0908 14:00:54.008296 3002591 main.go:141] libmachine: Using SSH client type: native
	I0908 14:00:54.008653 3002591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36778 <nil> <nil>}
	I0908 14:00:54.008664 3002591 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 14:00:54.009506 3002591 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0908 14:00:57.153840 3002591 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-207352
	
	I0908 14:00:57.153865 3002591 ubuntu.go:182] provisioning hostname "calico-207352"
	I0908 14:00:57.153930 3002591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-207352
	I0908 14:00:57.183193 3002591 main.go:141] libmachine: Using SSH client type: native
	I0908 14:00:57.183502 3002591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36778 <nil> <nil>}
	I0908 14:00:57.183513 3002591 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-207352 && echo "calico-207352" | sudo tee /etc/hostname
	I0908 14:00:57.339508 3002591 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-207352
	
	I0908 14:00:57.339651 3002591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-207352
	I0908 14:00:57.364006 3002591 main.go:141] libmachine: Using SSH client type: native
	I0908 14:00:57.364310 3002591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 36778 <nil> <nil>}
	I0908 14:00:57.364327 3002591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-207352' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-207352/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-207352' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 14:00:57.494620 3002591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:00:57.494694 3002591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-2749258/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-2749258/.minikube}
	I0908 14:00:57.494738 3002591 ubuntu.go:190] setting up certificates
	I0908 14:00:57.494776 3002591 provision.go:84] configureAuth start
	I0908 14:00:57.494871 3002591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-207352
	I0908 14:00:57.514864 3002591 provision.go:143] copyHostCerts
	I0908 14:00:57.514942 3002591 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem, removing ...
	I0908 14:00:57.514952 3002591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem
	I0908 14:00:57.515031 3002591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.pem (1082 bytes)
	I0908 14:00:57.515128 3002591 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem, removing ...
	I0908 14:00:57.515133 3002591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem
	I0908 14:00:57.515159 3002591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/cert.pem (1123 bytes)
	I0908 14:00:57.515220 3002591 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem, removing ...
	I0908 14:00:57.515229 3002591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem
	I0908 14:00:57.515256 3002591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-2749258/.minikube/key.pem (1679 bytes)
	I0908 14:00:57.515311 3002591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem org=jenkins.calico-207352 san=[127.0.0.1 192.168.85.2 calico-207352 localhost minikube]
	I0908 14:00:58.003201 3002591 provision.go:177] copyRemoteCerts
	I0908 14:00:58.003317 3002591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 14:00:58.003389 3002591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-207352
	I0908 14:00:58.028261 3002591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36778 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/calico-207352/id_rsa Username:docker}
	I0908 14:00:58.128126 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 14:00:58.154319 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 14:00:58.180940 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 14:00:58.207586 3002591 provision.go:87] duration metric: took 712.769066ms to configureAuth
	I0908 14:00:58.207662 3002591 ubuntu.go:206] setting minikube options for container-runtime
	I0908 14:00:58.207910 3002591 config.go:182] Loaded profile config "calico-207352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 14:00:58.207939 3002591 machine.go:96] duration metric: took 4.229825493s to provisionDockerMachine
	I0908 14:00:58.207974 3002591 client.go:171] duration metric: took 12.103294991s to LocalClient.Create
	I0908 14:00:58.208006 3002591 start.go:167] duration metric: took 12.103400235s to libmachine.API.Create "calico-207352"
	I0908 14:00:58.208026 3002591 start.go:293] postStartSetup for "calico-207352" (driver="docker")
	I0908 14:00:58.208062 3002591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 14:00:58.208146 3002591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 14:00:58.208218 3002591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-207352
	I0908 14:00:58.235675 3002591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36778 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/calico-207352/id_rsa Username:docker}
	I0908 14:00:58.328698 3002591 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 14:00:58.332595 3002591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 14:00:58.332628 3002591 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 14:00:58.332638 3002591 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 14:00:58.332646 3002591 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 14:00:58.332656 3002591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/addons for local assets ...
	I0908 14:00:58.332708 3002591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-2749258/.minikube/files for local assets ...
	I0908 14:00:58.332791 3002591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem -> 27511142.pem in /etc/ssl/certs
	I0908 14:00:58.332893 3002591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 14:00:58.342164 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 14:00:58.369750 3002591 start.go:296] duration metric: took 161.682865ms for postStartSetup
	I0908 14:00:58.370242 3002591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-207352
	I0908 14:00:58.393522 3002591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/config.json ...
	I0908 14:00:58.393806 3002591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:00:58.393849 3002591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-207352
	I0908 14:00:58.419921 3002591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36778 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/calico-207352/id_rsa Username:docker}
	I0908 14:00:58.507867 3002591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 14:00:58.513275 3002591 start.go:128] duration metric: took 12.412662278s to createHost
	I0908 14:00:58.513308 3002591 start.go:83] releasing machines lock for "calico-207352", held for 12.412805831s
	I0908 14:00:58.513382 3002591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-207352
	I0908 14:00:58.533552 3002591 ssh_runner.go:195] Run: cat /version.json
	I0908 14:00:58.533605 3002591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-207352
	I0908 14:00:58.533836 3002591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 14:00:58.533922 3002591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-207352
	I0908 14:00:58.569123 3002591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36778 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/calico-207352/id_rsa Username:docker}
	I0908 14:00:58.574587 3002591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36778 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/calico-207352/id_rsa Username:docker}
	I0908 14:00:58.799533 3002591 ssh_runner.go:195] Run: systemctl --version
	I0908 14:00:58.804275 3002591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 14:00:58.808408 3002591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 14:00:58.834822 3002591 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 14:00:58.834903 3002591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 14:00:58.882095 3002591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0908 14:00:58.882172 3002591 start.go:495] detecting cgroup driver to use...
	I0908 14:00:58.882219 3002591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 14:00:58.882305 3002591 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 14:00:58.896390 3002591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 14:00:58.909156 3002591 docker.go:218] disabling cri-docker service (if available) ...
	I0908 14:00:58.909294 3002591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 14:00:58.924478 3002591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 14:00:58.940281 3002591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 14:00:59.056329 3002591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 14:00:59.169043 3002591 docker.go:234] disabling docker service ...
	I0908 14:00:59.169165 3002591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 14:00:59.193542 3002591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 14:00:59.206463 3002591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 14:00:59.315177 3002591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 14:00:59.436298 3002591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 14:00:59.450003 3002591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:00:59.468395 3002591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 14:00:59.479022 3002591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 14:00:59.489492 3002591 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 14:00:59.489615 3002591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 14:00:59.500083 3002591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 14:00:59.510937 3002591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 14:00:59.521539 3002591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 14:00:59.531997 3002591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 14:00:59.541842 3002591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 14:00:59.552651 3002591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 14:00:59.562970 3002591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 14:00:59.579157 3002591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 14:00:59.588870 3002591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 14:00:59.598016 3002591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:00:59.710285 3002591 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 14:00:59.881339 3002591 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0908 14:00:59.881454 3002591 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0908 14:00:59.885954 3002591 start.go:563] Will wait 60s for crictl version
	I0908 14:00:59.886053 3002591 ssh_runner.go:195] Run: which crictl
	I0908 14:00:59.889974 3002591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 14:00:59.945955 3002591 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0908 14:00:59.946059 3002591 ssh_runner.go:195] Run: containerd --version
	I0908 14:00:59.978104 3002591 ssh_runner.go:195] Run: containerd --version
	I0908 14:01:00.013778 3002591 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0908 14:01:00.029555 3002591 cli_runner.go:164] Run: docker network inspect calico-207352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 14:01:00.084577 3002591 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0908 14:01:00.090560 3002591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 14:01:00.118890 3002591 kubeadm.go:875] updating cluster {Name:calico-207352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-207352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 14:01:00.119031 3002591 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 14:01:00.119117 3002591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:01:00.194978 3002591 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 14:01:00.195006 3002591 containerd.go:534] Images already preloaded, skipping extraction
	I0908 14:01:00.195110 3002591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:01:00.259433 3002591 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 14:01:00.259466 3002591 cache_images.go:85] Images are preloaded, skipping loading
	I0908 14:01:00.259477 3002591 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 containerd true true} ...
	I0908 14:01:00.259577 3002591 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-207352 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-207352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
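The empty ExecStart= line in the rendered kubelet unit is the standard systemd override idiom: for a non-oneshot service a drop-in cannot append to an existing ExecStart, so it first clears the base unit's command and then redefines it with the minikube-specific kubelet flags (bootstrap kubeconfig, config file, hostname override and node IP).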
	I0908 14:01:00.259655 3002591 ssh_runner.go:195] Run: sudo crictl info
	I0908 14:01:00.408213 3002591 cni.go:84] Creating CNI manager for "calico"
	I0908 14:01:00.408240 3002591 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 14:01:00.408264 3002591 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-207352 NodeName:calico-207352 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 14:01:00.408382 3002591 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-207352"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 14:01:00.408483 3002591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 14:01:00.439171 3002591 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 14:01:00.439264 3002591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 14:01:00.469543 3002591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0908 14:01:00.497770 3002591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 14:01:00.532436 3002591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
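The 2226-byte file copied here is the multi-document YAML rendered a few lines earlier: one InitConfiguration, one ClusterConfiguration, one KubeletConfiguration and one KubeProxyConfiguration, separated by "---". A small sketch that splits such a file and prints the kind of each document, handy when checking what kubeadm will actually consume (the path matches the log; this is naive string handling rather than a YAML parser):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}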
	I0908 14:01:00.560980 3002591 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0908 14:01:00.566409 3002591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 14:01:00.580944 3002591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:01:00.704022 3002591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:01:00.723309 3002591 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352 for IP: 192.168.85.2
	I0908 14:01:00.723331 3002591 certs.go:194] generating shared ca certs ...
	I0908 14:01:00.723347 3002591 certs.go:226] acquiring lock for ca certs: {Name:mka64c3c41f67c038c6cf0d4d20f2375b7abe78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:01:00.723478 3002591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key
	I0908 14:01:00.723540 3002591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key
	I0908 14:01:00.723548 3002591 certs.go:256] generating profile certs ...
	I0908 14:01:00.723624 3002591 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/client.key
	I0908 14:01:00.723641 3002591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/client.crt with IP's: []
	I0908 14:01:02.429779 3002591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/client.crt ...
	I0908 14:01:02.429856 3002591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/client.crt: {Name:mkf26d4bc4a888fe57f42f317db5dbe42e6f4518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:01:02.430592 3002591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/client.key ...
	I0908 14:01:02.430647 3002591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/client.key: {Name:mk9e8415ec5d9f92282323d375c8ec5bf98451f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:01:02.430769 3002591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/apiserver.key.ecd4e2a6
	I0908 14:01:02.430815 3002591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/apiserver.crt.ecd4e2a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0908 14:01:02.580179 3002591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/apiserver.crt.ecd4e2a6 ...
	I0908 14:01:02.580255 3002591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/apiserver.crt.ecd4e2a6: {Name:mk998e927870cea6be889f848087471efade9a25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:01:02.582969 3002591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/apiserver.key.ecd4e2a6 ...
	I0908 14:01:02.583042 3002591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/apiserver.key.ecd4e2a6: {Name:mk638d59f01567320b7436f1c1a6543233d1a395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:01:02.583818 3002591 certs.go:381] copying /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/apiserver.crt.ecd4e2a6 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/apiserver.crt
	I0908 14:01:02.583948 3002591 certs.go:385] copying /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/apiserver.key.ecd4e2a6 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/apiserver.key
	I0908 14:01:02.584042 3002591 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/proxy-client.key
	I0908 14:01:02.584083 3002591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/proxy-client.crt with IP's: []
	I0908 14:01:02.878277 3002591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/proxy-client.crt ...
	I0908 14:01:02.878376 3002591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/proxy-client.crt: {Name:mk5372ee6f013c57c8ed5e87239b077eed381e24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:01:02.879147 3002591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/proxy-client.key ...
	I0908 14:01:02.879163 3002591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/proxy-client.key: {Name:mk34b6bf9a891c4d0c77f167e868a40640d862c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:01:02.879352 3002591 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem (1338 bytes)
	W0908 14:01:02.879394 3002591 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114_empty.pem, impossibly tiny 0 bytes
	I0908 14:01:02.879405 3002591 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 14:01:02.879429 3002591 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/ca.pem (1082 bytes)
	I0908 14:01:02.879451 3002591 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/cert.pem (1123 bytes)
	I0908 14:01:02.879475 3002591 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/key.pem (1679 bytes)
	I0908 14:01:02.879517 3002591 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem (1708 bytes)
	I0908 14:01:02.880123 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 14:01:02.904758 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 14:01:02.945527 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 14:01:02.978880 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 14:01:03.010532 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 14:01:03.042043 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 14:01:03.068832 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 14:01:03.098804 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/calico-207352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 14:01:03.124748 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 14:01:03.150722 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/certs/2751114.pem --> /usr/share/ca-certificates/2751114.pem (1338 bytes)
	I0908 14:01:03.180409 3002591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/ssl/certs/27511142.pem --> /usr/share/ca-certificates/27511142.pem (1708 bytes)
	I0908 14:01:03.208644 3002591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 14:01:03.228585 3002591 ssh_runner.go:195] Run: openssl version
	I0908 14:01:03.234242 3002591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 14:01:03.244215 3002591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:01:03.247967 3002591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:01:03.248108 3002591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:01:03.255366 3002591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 14:01:03.265352 3002591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2751114.pem && ln -fs /usr/share/ca-certificates/2751114.pem /etc/ssl/certs/2751114.pem"
	I0908 14:01:03.275017 3002591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2751114.pem
	I0908 14:01:03.278708 3002591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 12:41 /usr/share/ca-certificates/2751114.pem
	I0908 14:01:03.278813 3002591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2751114.pem
	I0908 14:01:03.286012 3002591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2751114.pem /etc/ssl/certs/51391683.0"
	I0908 14:01:03.295944 3002591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27511142.pem && ln -fs /usr/share/ca-certificates/27511142.pem /etc/ssl/certs/27511142.pem"
	I0908 14:01:03.305584 3002591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27511142.pem
	I0908 14:01:03.309408 3002591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 12:41 /usr/share/ca-certificates/27511142.pem
	I0908 14:01:03.309517 3002591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27511142.pem
	I0908 14:01:03.316680 3002591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27511142.pem /etc/ssl/certs/3ec20f2e.0"
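The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: tools that trust /etc/ssl/certs locate a CA by the hash of its subject name, so each copied PEM gets a <subject-hash>.0 symlink (b5213941.0, 51391683.0 and 3ec20f2e.0 here). A sketch of the same step, shelling out to openssl for the hash and creating the link (paths follow the log; requires root and an openssl binary on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Recreate the symlink so OpenSSL's hashed lookup can find the CA.
	_ = os.Remove(link)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link)
}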
	I0908 14:01:03.326846 3002591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 14:01:03.330990 3002591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 14:01:03.331093 3002591 kubeadm.go:392] StartCluster: {Name:calico-207352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-207352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:01:03.331230 3002591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0908 14:01:03.331325 3002591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 14:01:03.393668 3002591 cri.go:89] found id: ""
	I0908 14:01:03.393790 3002591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 14:01:03.404755 3002591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 14:01:03.414255 3002591 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 14:01:03.414405 3002591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 14:01:03.425947 3002591 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 14:01:03.426022 3002591 kubeadm.go:157] found existing configuration files:
	
	I0908 14:01:03.426105 3002591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 14:01:03.435850 3002591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 14:01:03.435953 3002591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 14:01:03.444918 3002591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 14:01:03.454741 3002591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 14:01:03.454858 3002591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 14:01:03.463836 3002591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 14:01:03.473534 3002591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 14:01:03.473644 3002591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 14:01:03.482514 3002591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 14:01:03.492137 3002591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 14:01:03.492247 3002591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
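This cleanup pass checks each existing kubeconfig under /etc/kubernetes for the expected https://control-plane.minikube.internal:8443 endpoint and deletes any file that does not reference it so kubeadm will regenerate it; on a first start like this one none of the files exist, the greps exit with status 2, and the rm -f calls are no-ops.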
	I0908 14:01:03.501172 3002591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 14:01:03.554687 3002591 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 14:01:03.554830 3002591 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 14:01:03.595340 3002591 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 14:01:03.595502 3002591 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0908 14:01:03.595569 3002591 kubeadm.go:310] OS: Linux
	I0908 14:01:03.595643 3002591 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 14:01:03.595726 3002591 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 14:01:03.595807 3002591 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 14:01:03.595888 3002591 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 14:01:03.595973 3002591 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 14:01:03.596058 3002591 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 14:01:03.596135 3002591 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 14:01:03.596217 3002591 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 14:01:03.596294 3002591 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 14:01:03.716283 3002591 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 14:01:03.716453 3002591 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 14:01:03.716573 3002591 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 14:01:03.723240 3002591 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 14:01:03.729301 3002591 out.go:252]   - Generating certificates and keys ...
	I0908 14:01:03.729469 3002591 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 14:01:03.729578 3002591 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 14:01:05.425129 3002591 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 14:01:05.640533 3002591 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 14:01:06.086873 3002591 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 14:01:07.037351 3002591 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 14:01:07.248946 3002591 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 14:01:07.249493 3002591 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-207352 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0908 14:01:07.616313 3002591 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 14:01:07.616862 3002591 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-207352 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0908 14:01:07.941774 3002591 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 14:01:08.750722 3002591 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 14:01:08.920238 3002591 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 14:01:08.920568 3002591 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 14:01:09.770828 3002591 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 14:01:10.522026 3002591 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 14:01:11.430712 3002591 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 14:01:11.718146 3002591 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 14:01:13.355217 3002591 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 14:01:13.356427 3002591 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 14:01:13.362099 3002591 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 14:01:13.365693 3002591 out.go:252]   - Booting up control plane ...
	I0908 14:01:13.365795 3002591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 14:01:13.365891 3002591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 14:01:13.365984 3002591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 14:01:13.378772 3002591 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 14:01:13.378897 3002591 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 14:01:13.383608 3002591 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 14:01:13.384922 3002591 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 14:01:13.384993 3002591 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 14:01:13.543869 3002591 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 14:01:13.543995 3002591 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 14:01:15.547060 3002591 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.003250216s
	I0908 14:01:15.558367 3002591 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 14:01:15.558471 3002591 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0908 14:01:15.558568 3002591 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 14:01:15.558654 3002591 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 14:01:22.101815 3002591 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 6.541491514s
	I0908 14:01:24.642926 3002591 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 9.084550406s
	I0908 14:01:25.062568 3002591 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 9.503121285s
	I0908 14:01:25.085890 3002591 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 14:01:25.100473 3002591 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 14:01:25.120808 3002591 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 14:01:25.121336 3002591 kubeadm.go:310] [mark-control-plane] Marking the node calico-207352 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 14:01:25.137549 3002591 kubeadm.go:310] [bootstrap-token] Using token: ol9gnb.op34qnzk6mlxc6nq
	I0908 14:01:25.140639 3002591 out.go:252]   - Configuring RBAC rules ...
	I0908 14:01:25.140773 3002591 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 14:01:25.145584 3002591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 14:01:25.156739 3002591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 14:01:25.161364 3002591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 14:01:25.165778 3002591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 14:01:25.170708 3002591 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 14:01:25.471984 3002591 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 14:01:25.905035 3002591 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 14:01:26.470036 3002591 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 14:01:26.471365 3002591 kubeadm.go:310] 
	I0908 14:01:26.471446 3002591 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 14:01:26.471465 3002591 kubeadm.go:310] 
	I0908 14:01:26.471543 3002591 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 14:01:26.471548 3002591 kubeadm.go:310] 
	I0908 14:01:26.471577 3002591 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 14:01:26.471651 3002591 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 14:01:26.471707 3002591 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 14:01:26.471718 3002591 kubeadm.go:310] 
	I0908 14:01:26.471776 3002591 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 14:01:26.471781 3002591 kubeadm.go:310] 
	I0908 14:01:26.471833 3002591 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 14:01:26.471841 3002591 kubeadm.go:310] 
	I0908 14:01:26.471897 3002591 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 14:01:26.471979 3002591 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 14:01:26.472062 3002591 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 14:01:26.472066 3002591 kubeadm.go:310] 
	I0908 14:01:26.472162 3002591 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 14:01:26.472256 3002591 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 14:01:26.472261 3002591 kubeadm.go:310] 
	I0908 14:01:26.472358 3002591 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ol9gnb.op34qnzk6mlxc6nq \
	I0908 14:01:26.472473 3002591 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea53cc69e0a3bf1a14d944ab7c19a16da3435f4f2a06e216682a58ad77f6bb3 \
	I0908 14:01:26.472494 3002591 kubeadm.go:310] 	--control-plane 
	I0908 14:01:26.472499 3002591 kubeadm.go:310] 
	I0908 14:01:26.472584 3002591 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 14:01:26.472589 3002591 kubeadm.go:310] 
	I0908 14:01:26.472671 3002591 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ol9gnb.op34qnzk6mlxc6nq \
	I0908 14:01:26.472778 3002591 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea53cc69e0a3bf1a14d944ab7c19a16da3435f4f2a06e216682a58ad77f6bb3 
	I0908 14:01:26.477476 3002591 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 14:01:26.477761 3002591 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0908 14:01:26.477883 3002591 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
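The --discovery-token-ca-cert-hash printed in the join commands above is the hex-encoded SHA-256 of the cluster CA's Subject Public Key Info; a joining node uses it to pin the CA it fetches via the bootstrap token. It can be recomputed from the CA certificate to verify a join command, roughly like this (the path is the in-node location used earlier in this log; a verification sketch only, not part of the test flow):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's CA cert hash is SHA-256 over the Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}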
	I0908 14:01:26.477987 3002591 cni.go:84] Creating CNI manager for "calico"
	I0908 14:01:26.481296 3002591 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0908 14:01:26.484982 3002591 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 14:01:26.485008 3002591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0908 14:01:26.514304 3002591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 14:01:29.768079 3002591 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3.253743008s)
	I0908 14:01:29.768121 3002591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 14:01:29.768252 3002591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:01:29.768330 3002591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-207352 minikube.k8s.io/updated_at=2025_09_08T14_01_29_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba minikube.k8s.io/name=calico-207352 minikube.k8s.io/primary=true
	I0908 14:01:30.071144 3002591 ops.go:34] apiserver oom_adj: -16
	I0908 14:01:30.071255 3002591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:01:30.571659 3002591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:01:31.071576 3002591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:01:31.571542 3002591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:01:31.697276 3002591 kubeadm.go:1105] duration metric: took 1.92907009s to wait for elevateKubeSystemPrivileges
	I0908 14:01:31.697311 3002591 kubeadm.go:394] duration metric: took 28.366222878s to StartCluster
	I0908 14:01:31.697329 3002591 settings.go:142] acquiring lock: {Name:mk4a46c455122873706b4d72c01ce6416a89153c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:01:31.697406 3002591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 14:01:31.698435 3002591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-2749258/kubeconfig: {Name:mka527495d16a3d35c90627136063e6207a6b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:01:31.698645 3002591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 14:01:31.698656 3002591 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 14:01:31.698926 3002591 config.go:182] Loaded profile config "calico-207352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 14:01:31.698962 3002591 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 14:01:31.699017 3002591 addons.go:69] Setting storage-provisioner=true in profile "calico-207352"
	I0908 14:01:31.699030 3002591 addons.go:238] Setting addon storage-provisioner=true in "calico-207352"
	I0908 14:01:31.699063 3002591 host.go:66] Checking if "calico-207352" exists ...
	I0908 14:01:31.699536 3002591 cli_runner.go:164] Run: docker container inspect calico-207352 --format={{.State.Status}}
	I0908 14:01:31.699887 3002591 addons.go:69] Setting default-storageclass=true in profile "calico-207352"
	I0908 14:01:31.699906 3002591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-207352"
	I0908 14:01:31.700173 3002591 cli_runner.go:164] Run: docker container inspect calico-207352 --format={{.State.Status}}
	I0908 14:01:31.703428 3002591 out.go:179] * Verifying Kubernetes components...
	I0908 14:01:31.706892 3002591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:01:31.732446 3002591 addons.go:238] Setting addon default-storageclass=true in "calico-207352"
	I0908 14:01:31.732490 3002591 host.go:66] Checking if "calico-207352" exists ...
	I0908 14:01:31.732906 3002591 cli_runner.go:164] Run: docker container inspect calico-207352 --format={{.State.Status}}
	I0908 14:01:31.753686 3002591 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:01:31.756579 3002591 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:01:31.756606 3002591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 14:01:31.756678 3002591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-207352
	I0908 14:01:31.786973 3002591 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 14:01:31.786992 3002591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 14:01:31.787052 3002591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-207352
	I0908 14:01:31.827666 3002591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36778 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/calico-207352/id_rsa Username:docker}
	I0908 14:01:31.830047 3002591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36778 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/calico-207352/id_rsa Username:docker}
	I0908 14:01:32.097795 3002591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 14:01:32.182682 3002591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:01:32.182880 3002591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 14:01:32.258086 3002591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:01:32.994777 3002591 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
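The ConfigMap replace a few lines above injects a hosts plugin block ahead of the "forward . /etc/resolv.conf" line in the Corefile, mapping host.minikube.internal to 192.168.85.1 with fallthrough so every other name still goes to the upstream resolver, and adds a log directive after errors; that is what the "host record injected into CoreDNS's ConfigMap" message confirms.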
	I0908 14:01:32.996583 3002591 node_ready.go:35] waiting up to 15m0s for node "calico-207352" to be "Ready" ...
	I0908 14:01:32.999740 3002591 node_ready.go:49] node "calico-207352" is "Ready"
	I0908 14:01:32.999828 3002591 node_ready.go:38] duration metric: took 3.222431ms for node "calico-207352" to be "Ready" ...
	I0908 14:01:32.999857 3002591 api_server.go:52] waiting for apiserver process to appear ...
	I0908 14:01:32.999948 3002591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:01:33.245398 3002591 api_server.go:72] duration metric: took 1.546716102s to wait for apiserver process to appear ...
	I0908 14:01:33.245481 3002591 api_server.go:88] waiting for apiserver healthz status ...
	I0908 14:01:33.245515 3002591 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0908 14:01:33.248928 3002591 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0908 14:01:33.253926 3002591 addons.go:514] duration metric: took 1.554952232s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0908 14:01:33.266832 3002591 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0908 14:01:33.267896 3002591 api_server.go:141] control plane version: v1.34.0
	I0908 14:01:33.267920 3002591 api_server.go:131] duration metric: took 22.417828ms to wait for apiserver health ...
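The healthz wait boils down to polling https://192.168.85.2:8443/healthz until it returns 200 with body "ok", tolerating the self-signed serving certificate. A minimal standalone poller in the same spirit (address and rough timeout mirror the log; skipping certificate verification is acceptable only for this kind of local probe):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == 200 && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver")
}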
	I0908 14:01:33.267929 3002591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 14:01:33.276188 3002591 system_pods.go:59] 10 kube-system pods found
	I0908 14:01:33.276300 3002591 system_pods.go:61] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:33.276343 3002591 system_pods.go:61] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:33.276370 3002591 system_pods.go:61] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:33.276391 3002591 system_pods.go:61] "coredns-66bc5c9577-r75mq" [f7916186-31de-4975-b1fd-b92a51acf314] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:33.276424 3002591 system_pods.go:61] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:01:33.276447 3002591 system_pods.go:61] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:33.276467 3002591 system_pods.go:61] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:33.276484 3002591 system_pods.go:61] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:33.276514 3002591 system_pods.go:61] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:33.276535 3002591 system_pods.go:61] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Pending
	I0908 14:01:33.276554 3002591 system_pods.go:74] duration metric: took 8.619347ms to wait for pod list to return data ...
	I0908 14:01:33.276573 3002591 default_sa.go:34] waiting for default service account to be created ...
	I0908 14:01:33.282489 3002591 default_sa.go:45] found service account: "default"
	I0908 14:01:33.282564 3002591 default_sa.go:55] duration metric: took 5.960289ms for default service account to be created ...
	I0908 14:01:33.282587 3002591 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 14:01:33.289032 3002591 system_pods.go:86] 10 kube-system pods found
	I0908 14:01:33.289111 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:33.289137 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:33.289160 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:33.289200 3002591 system_pods.go:89] "coredns-66bc5c9577-r75mq" [f7916186-31de-4975-b1fd-b92a51acf314] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:33.289218 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:01:33.289237 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:33.289275 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:33.289297 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:33.289314 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:33.289335 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:01:33.289396 3002591 retry.go:31] will retry after 236.790454ms: missing components: kube-dns
	I0908 14:01:33.499250 3002591 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-207352" context rescaled to 1 replicas
	I0908 14:01:33.529948 3002591 system_pods.go:86] 10 kube-system pods found
	I0908 14:01:33.529981 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:33.529996 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:33.530005 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:33.530011 3002591 system_pods.go:89] "coredns-66bc5c9577-r75mq" [f7916186-31de-4975-b1fd-b92a51acf314] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:33.530017 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:01:33.530028 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:33.530033 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:33.530040 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:33.530044 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:33.530055 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:01:33.530073 3002591 retry.go:31] will retry after 291.162362ms: missing components: kube-dns
	I0908 14:01:33.825928 3002591 system_pods.go:86] 10 kube-system pods found
	I0908 14:01:33.825966 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:33.825975 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:33.825982 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:33.825990 3002591 system_pods.go:89] "coredns-66bc5c9577-r75mq" [f7916186-31de-4975-b1fd-b92a51acf314] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:33.825996 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:01:33.826001 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:33.826006 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:33.826018 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:33.826022 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:33.826036 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 14:01:33.826050 3002591 retry.go:31] will retry after 371.091387ms: missing components: kube-dns
	I0908 14:01:34.201736 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:34.201774 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:34.201786 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:34.201794 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:34.201799 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:34.201804 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:34.201808 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:34.201813 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:34.201817 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:34.201821 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:34.201835 3002591 retry.go:31] will retry after 562.688806ms: missing components: kube-dns
	I0908 14:01:34.769236 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:34.769277 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:34.769287 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:34.769294 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:34.769300 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:34.769306 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:34.769310 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:34.769314 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:34.769318 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:34.769322 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:34.769335 3002591 retry.go:31] will retry after 717.506177ms: missing components: kube-dns
	I0908 14:01:35.490570 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:35.490607 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:35.490618 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:35.490625 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:35.490630 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:35.490636 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:35.490641 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:35.490646 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:35.490649 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:35.490653 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:35.490667 3002591 retry.go:31] will retry after 856.928294ms: missing components: kube-dns
	I0908 14:01:36.351703 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:36.351742 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:36.351754 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:36.351761 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:36.351765 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:36.351771 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:36.351775 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:36.351780 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:36.351784 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:36.351788 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:36.351801 3002591 retry.go:31] will retry after 997.741627ms: missing components: kube-dns
	I0908 14:01:37.353315 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:37.353350 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:37.353363 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:37.353370 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:37.353375 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:37.353382 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:37.353386 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:37.353390 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:37.353395 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:37.353400 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:37.353414 3002591 retry.go:31] will retry after 927.750534ms: missing components: kube-dns
	I0908 14:01:38.285836 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:38.285877 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:38.285888 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:38.285902 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:38.285907 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:38.285913 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:38.285917 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:38.285921 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:38.285926 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:38.285930 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:38.285949 3002591 retry.go:31] will retry after 1.281591475s: missing components: kube-dns
	I0908 14:01:39.571142 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:39.571181 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:39.571193 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:39.571201 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:39.571206 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:39.571212 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:39.571216 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:39.571220 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:39.571224 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:39.571228 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:39.571241 3002591 retry.go:31] will retry after 2.144755356s: missing components: kube-dns
	I0908 14:01:41.720678 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:41.720714 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:41.720724 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:41.720736 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:41.720742 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:41.720748 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:41.720753 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:41.720759 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:41.720763 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:41.720767 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:41.720786 3002591 retry.go:31] will retry after 2.021963183s: missing components: kube-dns
	I0908 14:01:43.747447 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:43.747482 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:43.747525 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:43.747533 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:43.747543 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:43.747549 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:43.747553 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:43.747557 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:43.747561 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:43.747573 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:43.747586 3002591 retry.go:31] will retry after 2.546827312s: missing components: kube-dns
	I0908 14:01:46.311575 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:46.311611 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:46.311620 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:46.311629 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:46.311633 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:46.311638 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:46.311642 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:46.311648 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:46.311652 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:46.311655 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:46.311668 3002591 retry.go:31] will retry after 3.230199216s: missing components: kube-dns
	I0908 14:01:49.546667 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:49.546703 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:49.546713 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:49.546727 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:49.546732 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:49.546738 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:49.546747 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:49.546763 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:49.546770 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:49.546774 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:49.546788 3002591 retry.go:31] will retry after 3.724881097s: missing components: kube-dns
	I0908 14:01:53.275551 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:53.275588 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:53.275599 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:53.275606 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:53.275611 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:53.275617 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:53.275621 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:53.275626 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:53.275631 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:53.275637 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:53.275651 3002591 retry.go:31] will retry after 5.987890201s: missing components: kube-dns
	I0908 14:01:59.269238 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:01:59.269275 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:01:59.269285 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:01:59.269292 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:01:59.269296 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:01:59.269302 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:01:59.269306 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:01:59.269310 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:01:59.269314 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:01:59.269318 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:01:59.269332 3002591 retry.go:31] will retry after 7.025292108s: missing components: kube-dns
	I0908 14:02:06.301144 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:02:06.301183 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:02:06.301195 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:02:06.301204 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:02:06.301209 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:02:06.301215 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:02:06.301219 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:02:06.301235 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:02:06.301245 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:02:06.301249 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:02:06.301264 3002591 retry.go:31] will retry after 10.353883587s: missing components: kube-dns
	I0908 14:02:16.659200 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:02:16.659235 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:02:16.659246 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:02:16.659255 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:02:16.659260 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:02:16.659265 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:02:16.659269 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:02:16.659273 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:02:16.659277 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:02:16.659280 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:02:16.659293 3002591 retry.go:31] will retry after 13.825705782s: missing components: kube-dns
	I0908 14:02:30.491802 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:02:30.491833 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:02:30.491843 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:02:30.491850 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:02:30.491854 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:02:30.491859 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:02:30.491865 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:02:30.491869 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:02:30.491873 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:02:30.491876 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:02:30.491888 3002591 retry.go:31] will retry after 13.341334133s: missing components: kube-dns
	I0908 14:02:43.839354 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:02:43.839385 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:02:43.839407 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:02:43.839417 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:02:43.839422 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:02:43.839427 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:02:43.839431 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:02:43.839434 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:02:43.839438 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:02:43.839442 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:02:43.839455 3002591 retry.go:31] will retry after 19.023596606s: missing components: kube-dns
	I0908 14:03:02.867940 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:03:02.867973 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:03:02.867983 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:03:02.867991 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:03:02.867995 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:03:02.868000 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:03:02.868005 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:03:02.868009 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:03:02.868013 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:03:02.868016 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:03:02.868030 3002591 retry.go:31] will retry after 23.768295872s: missing components: kube-dns
	I0908 14:03:26.641162 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:03:26.641195 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:03:26.641211 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:03:26.641218 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:03:26.641222 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:03:26.641227 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:03:26.641230 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:03:26.641234 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:03:26.641238 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:03:26.641241 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:03:26.641254 3002591 retry.go:31] will retry after 24.466895161s: missing components: kube-dns
	I0908 14:03:51.113620 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:03:51.113655 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:03:51.113667 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:03:51.113675 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:03:51.113679 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:03:51.113685 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:03:51.113689 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:03:51.113694 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:03:51.113699 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:03:51.113704 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:03:51.113719 3002591 retry.go:31] will retry after 37.831394408s: missing components: kube-dns
	I0908 14:04:28.951948 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:04:28.951979 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:04:28.951989 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:04:28.951997 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:04:28.952001 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:04:28.952006 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:04:28.952011 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:04:28.952014 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:04:28.952018 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:04:28.952021 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:04:28.952036 3002591 retry.go:31] will retry after 40.531275608s: missing components: kube-dns
	I0908 14:05:09.489325 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:05:09.489361 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:05:09.489373 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:05:09.489382 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:05:09.489386 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:05:09.489393 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:05:09.489397 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:05:09.489401 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:05:09.489405 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:05:09.489409 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:05:09.489423 3002591 retry.go:31] will retry after 56.119521178s: missing components: kube-dns
	I0908 14:06:05.614174 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:06:05.614209 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:06:05.614220 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:06:05.614229 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:06:05.614234 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:06:05.614240 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:06:05.614245 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:06:05.614249 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:06:05.614253 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:06:05.614258 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:06:05.614271 3002591 retry.go:31] will retry after 52.338205523s: missing components: kube-dns
	I0908 14:06:57.956703 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:06:57.956737 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:06:57.956747 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:06:57.956755 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:06:57.956760 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:06:57.956765 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:06:57.956774 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:06:57.956779 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:06:57.956783 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:06:57.956790 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:06:57.956804 3002591 retry.go:31] will retry after 50.680677497s: missing components: kube-dns
	I0908 14:07:48.641205 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:07:48.641246 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:07:48.641271 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:07:48.641279 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:07:48.641284 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:07:48.641290 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:07:48.641295 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:07:48.641299 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:07:48.641305 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:07:48.641317 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:07:48.641342 3002591 retry.go:31] will retry after 52.869685793s: missing components: kube-dns
	I0908 14:08:41.514891 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:08:41.514935 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:08:41.514948 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:08:41.514957 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:08:41.514961 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:08:41.514966 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:08:41.514970 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:08:41.514975 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:08:41.514980 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:08:41.514993 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:08:41.515007 3002591 retry.go:31] will retry after 46.803109862s: missing components: kube-dns
	I0908 14:09:28.325164 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:09:28.325197 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:09:28.325207 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:09:28.325216 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:09:28.325221 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:09:28.325227 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:09:28.325231 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:09:28.325235 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:09:28.325239 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:09:28.325243 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:09:28.325255 3002591 retry.go:31] will retry after 54.638864144s: missing components: kube-dns
	I0908 14:10:22.968685 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:10:22.968723 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:10:22.968735 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:10:22.968744 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:10:22.968749 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:10:22.968755 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:10:22.968760 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:10:22.968809 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:10:22.968819 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:10:22.968825 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:10:22.968838 3002591 retry.go:31] will retry after 1m0.825962135s: missing components: kube-dns
	I0908 14:11:23.799266 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:11:23.799307 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:11:23.799319 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:11:23.799328 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:11:23.799334 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:11:23.799339 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:11:23.799343 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:11:23.799348 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:11:23.799352 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:11:23.799357 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:11:23.799371 3002591 retry.go:31] will retry after 47.222756217s: missing components: kube-dns
	I0908 14:12:11.025784 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:12:11.025819 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:12:11.025831 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:12:11.025839 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:12:11.025844 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:12:11.025850 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:12:11.025855 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:12:11.025863 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:12:11.025867 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:12:11.025872 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:12:11.025888 3002591 retry.go:31] will retry after 1m7.688801697s: missing components: kube-dns
	I0908 14:13:18.718443 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:13:18.718477 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:13:18.718487 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:13:18.718494 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:13:18.718500 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:13:18.718505 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:13:18.718510 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:13:18.718515 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:13:18.718519 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:13:18.718529 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:13:18.718543 3002591 retry.go:31] will retry after 1m6.695358007s: missing components: kube-dns
	I0908 14:14:25.418273 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:14:25.418313 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:14:25.418326 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:14:25.418334 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:14:25.418361 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:14:25.418367 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:14:25.418371 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:14:25.418377 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:14:25.418381 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:14:25.418387 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:14:25.418406 3002591 retry.go:31] will retry after 1m2.396646246s: missing components: kube-dns
	I0908 14:15:27.819990 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:15:27.820029 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:15:27.820039 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:15:27.820047 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:15:27.820051 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:15:27.820057 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:15:27.820062 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:15:27.820067 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:15:27.820071 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:15:27.820075 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:15:27.820088 3002591 retry.go:31] will retry after 48.021064202s: missing components: kube-dns
	I0908 14:16:15.845703 3002591 system_pods.go:86] 9 kube-system pods found
	I0908 14:16:15.845748 3002591 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zfm5z" [a5c2ec71-dbc4-4e42-8522-aa3b75719c24] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0908 14:16:15.845761 3002591 system_pods.go:89] "calico-node-q6vzp" [41fe1edb-bc1b-4674-a730-75d0dabd7430] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0908 14:16:15.845784 3002591 system_pods.go:89] "coredns-66bc5c9577-8w6rq" [17777234-1033-43f6-abca-63f0e2785682] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 14:16:15.845794 3002591 system_pods.go:89] "etcd-calico-207352" [56495afa-97d6-4e39-af55-2ed33c842826] Running
	I0908 14:16:15.845799 3002591 system_pods.go:89] "kube-apiserver-calico-207352" [ade5ebf4-2b13-4eea-a44a-9d5099c358d9] Running
	I0908 14:16:15.845803 3002591 system_pods.go:89] "kube-controller-manager-calico-207352" [b5fd51ba-0176-4200-b9e3-dce85f1e7261] Running
	I0908 14:16:15.845808 3002591 system_pods.go:89] "kube-proxy-mxmpn" [ae282d3a-d547-49a2-9471-e1985451198c] Running
	I0908 14:16:15.845812 3002591 system_pods.go:89] "kube-scheduler-calico-207352" [aeaca404-ce21-453e-8d90-ac3530c7771d] Running
	I0908 14:16:15.845818 3002591 system_pods.go:89] "storage-provisioner" [390a426c-cd04-4bd5-b843-b5b73bb844cf] Running
	I0908 14:16:15.849157 3002591 out.go:203] 
	W0908 14:16:15.852050 3002591 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0908 14:16:15.852076 3002591 out.go:285] * 
	W0908 14:16:15.854207 3002591 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0908 14:16:15.857055 3002591 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (930.22s)
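The wait loop above never saw CoreDNS become Ready because calico-node-q6vzp was stuck on its init containers (upgrade-ipam, install-cni, mount-bpffs), so the Calico CNI never came up and kube-dns stayed Pending until the 15m timeout. A minimal diagnostic sketch, assuming the calico-207352 profile were still running and kubectl had a context for it; the pod, namespace, and container names are taken from the retry log above:

    # Why is CoreDNS still Pending? It depends on the Calico CNI, which never initialized.
    kubectl --context calico-207352 -n kube-system get pods -o wide
    kubectl --context calico-207352 -n kube-system describe pod calico-node-q6vzp
    # Inspect the first init container that did not complete
    kubectl --context calico-207352 -n kube-system logs calico-node-q6vzp -c upgrade-ipam
    kubectl --context calico-207352 -n kube-system get events --sort-by=.lastTimestamp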

                                                
                                    

Test pass (287/326)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.3
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 5.89
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.1
18 TestDownloadOnly/v1.34.0/DeleteAll 0.22
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 209.83
29 TestAddons/serial/Volcano 40.31
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 8.83
35 TestAddons/parallel/Registry 17.34
36 TestAddons/parallel/RegistryCreds 0.75
37 TestAddons/parallel/Ingress 20.74
38 TestAddons/parallel/InspektorGadget 6.33
39 TestAddons/parallel/MetricsServer 5.81
41 TestAddons/parallel/CSI 59.15
42 TestAddons/parallel/Headlamp 22.81
43 TestAddons/parallel/CloudSpanner 5.58
44 TestAddons/parallel/LocalPath 51.47
45 TestAddons/parallel/NvidiaDevicePlugin 5.58
46 TestAddons/parallel/Yakd 11.79
48 TestAddons/StoppedEnableDisable 12.26
49 TestCertOptions 36.44
50 TestCertExpiration 226.1
52 TestForceSystemdFlag 37.12
53 TestForceSystemdEnv 35.37
54 TestDockerEnvContainerd 46.75
59 TestErrorSpam/setup 31.42
60 TestErrorSpam/start 0.74
61 TestErrorSpam/status 1.38
62 TestErrorSpam/pause 1.83
63 TestErrorSpam/unpause 1.85
64 TestErrorSpam/stop 1.47
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 85.33
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.64
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.69
76 TestFunctional/serial/CacheCmd/cache/add_local 1.24
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 45.17
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.8
87 TestFunctional/serial/LogsFileCmd 1.76
88 TestFunctional/serial/InvalidService 4.54
90 TestFunctional/parallel/ConfigCmd 0.51
91 TestFunctional/parallel/DashboardCmd 9.96
92 TestFunctional/parallel/DryRun 0.6
93 TestFunctional/parallel/InternationalLanguage 0.25
94 TestFunctional/parallel/StatusCmd 1.02
98 TestFunctional/parallel/ServiceCmdConnect 8.6
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 26.4
102 TestFunctional/parallel/SSHCmd 0.53
103 TestFunctional/parallel/CpCmd 1.95
105 TestFunctional/parallel/FileSync 0.37
106 TestFunctional/parallel/CertSync 2.27
110 TestFunctional/parallel/NodeLabels 0.13
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
114 TestFunctional/parallel/License 0.38
115 TestFunctional/parallel/Version/short 0.08
116 TestFunctional/parallel/Version/components 1.49
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
121 TestFunctional/parallel/ImageCommands/ImageBuild 5.35
122 TestFunctional/parallel/ImageCommands/Setup 0.77
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.48
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.51
128 TestFunctional/parallel/ServiceCmd/DeployApp 8.25
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.32
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.6
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.33
139 TestFunctional/parallel/ServiceCmd/List 0.35
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
142 TestFunctional/parallel/ServiceCmd/Format 0.35
143 TestFunctional/parallel/ServiceCmd/URL 0.36
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
151 TestFunctional/parallel/ProfileCmd/profile_list 0.47
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
153 TestFunctional/parallel/MountCmd/any-port 8.09
154 TestFunctional/parallel/MountCmd/specific-port 1.75
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.38
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 134.68
164 TestMultiControlPlane/serial/DeployApp 43.56
165 TestMultiControlPlane/serial/PingHostFromPods 1.56
166 TestMultiControlPlane/serial/AddWorkerNode 16.39
167 TestMultiControlPlane/serial/NodeLabels 0.2
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.16
169 TestMultiControlPlane/serial/CopyFile 19.75
170 TestMultiControlPlane/serial/StopSecondaryNode 12.78
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
172 TestMultiControlPlane/serial/RestartSecondaryNode 13.95
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.3
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 109.71
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.55
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
177 TestMultiControlPlane/serial/StopCluster 35.97
178 TestMultiControlPlane/serial/RestartCluster 60.63
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
180 TestMultiControlPlane/serial/AddSecondaryNode 37.71
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.7
185 TestJSONOutput/start/Command 91.17
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.73
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.65
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.85
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 41.22
211 TestKicCustomNetwork/use_default_bridge_network 34.76
212 TestKicExistingNetwork 32.34
213 TestKicCustomSubnet 36.17
214 TestKicStaticIP 36.35
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 68.21
219 TestMountStart/serial/StartWithMountFirst 6.72
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 6.65
222 TestMountStart/serial/VerifyMountSecond 0.25
223 TestMountStart/serial/DeleteFirst 1.62
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.2
226 TestMountStart/serial/RestartStopped 7.96
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 69.9
231 TestMultiNode/serial/DeployApp2Nodes 18.23
232 TestMultiNode/serial/PingHostFrom2Pods 1.06
233 TestMultiNode/serial/AddNode 14.64
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.67
236 TestMultiNode/serial/CopyFile 10.1
237 TestMultiNode/serial/StopNode 2.29
238 TestMultiNode/serial/StartAfterStop 8.06
239 TestMultiNode/serial/RestartKeepsNodes 78.71
240 TestMultiNode/serial/DeleteNode 5.47
241 TestMultiNode/serial/StopMultiNode 23.94
242 TestMultiNode/serial/RestartMultiNode 53.5
243 TestMultiNode/serial/ValidateNameConflict 32.23
248 TestPreload 141.01
250 TestScheduledStopUnix 107.59
253 TestInsufficientStorage 10.51
254 TestRunningBinaryUpgrade 67.53
256 TestKubernetesUpgrade 176.8
257 TestMissingContainerUpgrade 150.07
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 43.41
261 TestNoKubernetes/serial/StartWithStopK8s 24.06
262 TestNoKubernetes/serial/Start 5.75
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
264 TestNoKubernetes/serial/ProfileList 0.68
265 TestNoKubernetes/serial/Stop 1.21
266 TestNoKubernetes/serial/StartNoArgs 6.84
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.42
268 TestStoppedBinaryUpgrade/Setup 0.71
269 TestStoppedBinaryUpgrade/Upgrade 58.96
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.46
279 TestPause/serial/Start 116.48
287 TestNetworkPlugins/group/false 3.8
291 TestPause/serial/SecondStartNoReconfiguration 6.64
292 TestPause/serial/Pause 0.93
293 TestPause/serial/VerifyStatus 0.39
294 TestPause/serial/Unpause 0.91
295 TestPause/serial/PauseAgain 1
296 TestPause/serial/DeletePaused 3.1
297 TestPause/serial/VerifyDeletedResources 6.13
299 TestStartStop/group/old-k8s-version/serial/FirstStart 71.78
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.5
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
302 TestStartStop/group/old-k8s-version/serial/Stop 12.18
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
304 TestStartStop/group/old-k8s-version/serial/SecondStart 48.87
307 TestStartStop/group/no-preload/serial/FirstStart 71.08
308 TestStartStop/group/no-preload/serial/DeployApp 9.35
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
310 TestStartStop/group/no-preload/serial/Stop 12.03
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/no-preload/serial/SecondStart 53.43
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
317 TestStartStop/group/old-k8s-version/serial/Pause 3.38
319 TestStartStop/group/embed-certs/serial/FirstStart 56.16
320 TestStartStop/group/embed-certs/serial/DeployApp 10.34
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
322 TestStartStop/group/embed-certs/serial/Stop 12.04
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
324 TestStartStop/group/embed-certs/serial/SecondStart 53.22
326 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
327 TestStartStop/group/no-preload/serial/Pause 3.14
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 93.44
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.38
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.98
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.99
338 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
339 TestStartStop/group/embed-certs/serial/Pause 3.02
341 TestStartStop/group/newest-cni/serial/FirstStart 35.84
342 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
344 TestStartStop/group/newest-cni/serial/Stop 1.23
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
346 TestStartStop/group/newest-cni/serial/SecondStart 16.02
347 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
350 TestStartStop/group/newest-cni/serial/Pause 3.06
351 TestNetworkPlugins/group/auto/Start 98.92
352 TestNetworkPlugins/group/auto/KubeletFlags 0.33
353 TestNetworkPlugins/group/auto/NetCatPod 10.31
354 TestNetworkPlugins/group/auto/DNS 0.18
355 TestNetworkPlugins/group/auto/Localhost 0.14
356 TestNetworkPlugins/group/auto/HairPin 0.16
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.16
359 TestNetworkPlugins/group/kindnet/Start 98.88
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
363 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
364 TestNetworkPlugins/group/kindnet/DNS 0.19
365 TestNetworkPlugins/group/kindnet/Localhost 0.18
366 TestNetworkPlugins/group/kindnet/HairPin 0.16
367 TestNetworkPlugins/group/custom-flannel/Start 50.37
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.32
370 TestNetworkPlugins/group/custom-flannel/DNS 0.18
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
373 TestNetworkPlugins/group/enable-default-cni/Start 75.04
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.28
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
379 TestNetworkPlugins/group/flannel/Start 142.53
380 TestNetworkPlugins/group/flannel/ControllerPod 6
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
382 TestNetworkPlugins/group/flannel/NetCatPod 10.26
383 TestNetworkPlugins/group/flannel/DNS 0.25
384 TestNetworkPlugins/group/flannel/Localhost 0.16
385 TestNetworkPlugins/group/flannel/HairPin 0.16
386 TestNetworkPlugins/group/bridge/Start 78.09
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
388 TestNetworkPlugins/group/bridge/NetCatPod 9.3
389 TestNetworkPlugins/group/bridge/DNS 0.18
390 TestNetworkPlugins/group/bridge/Localhost 0.22
391 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.28.0/json-events (6.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-848218 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-848218 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.30171267s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.30s)
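The json-events checks above drive minikube start with -o=json, which emits one JSON event per line instead of the human-readable progress output. A minimal sketch of filtering that stream with jq, assuming the current cloudevents-style fields (type and data.message) and a throwaway profile name download-demo; both field names and the profile are assumptions for illustration, not part of the test run above:

    # Stream minikube's JSON events and print each event type plus its message, if any.
    out/minikube-linux-arm64 start -o=json --download-only -p download-demo \
        --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker \
      | jq -r 'select(.data.message != null) | [.type, .data.message] | @tsv'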

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 12:32:56.015955 2751114 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I0908 12:32:56.016037 2751114 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-848218
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-848218: exit status 85 (92.894345ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-848218 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-848218 │ jenkins │ v1.36.0 │ 08 Sep 25 12:32 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:32:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:32:49.762835 2751119 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:32:49.762953 2751119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:32:49.762964 2751119 out.go:374] Setting ErrFile to fd 2...
	I0908 12:32:49.762968 2751119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:32:49.763231 2751119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	W0908 12:32:49.763391 2751119 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21508-2749258/.minikube/config/config.json: open /home/jenkins/minikube-integration/21508-2749258/.minikube/config/config.json: no such file or directory
	I0908 12:32:49.763788 2751119 out.go:368] Setting JSON to true
	I0908 12:32:49.764642 2751119 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65722,"bootTime":1757269048,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 12:32:49.764713 2751119 start.go:140] virtualization:  
	I0908 12:32:49.768761 2751119 out.go:99] [download-only-848218] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	W0908 12:32:49.769012 2751119 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 12:32:49.769078 2751119 notify.go:220] Checking for updates...
	I0908 12:32:49.772035 2751119 out.go:171] MINIKUBE_LOCATION=21508
	I0908 12:32:49.775089 2751119 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:32:49.778099 2751119 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 12:32:49.781171 2751119 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 12:32:49.784089 2751119 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 12:32:49.790028 2751119 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 12:32:49.790308 2751119 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:32:49.820368 2751119 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:32:49.820489 2751119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:32:49.876300 2751119 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 12:32:49.867139815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:32:49.876408 2751119 docker.go:318] overlay module found
	I0908 12:32:49.879461 2751119 out.go:99] Using the docker driver based on user configuration
	I0908 12:32:49.879504 2751119 start.go:304] selected driver: docker
	I0908 12:32:49.879517 2751119 start.go:918] validating driver "docker" against <nil>
	I0908 12:32:49.879628 2751119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:32:49.945138 2751119 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 12:32:49.935999884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:32:49.945294 2751119 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 12:32:49.945627 2751119 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 12:32:49.945783 2751119 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 12:32:49.948812 2751119 out.go:171] Using Docker driver with root privileges
	I0908 12:32:49.951794 2751119 cni.go:84] Creating CNI manager for ""
	I0908 12:32:49.951875 2751119 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 12:32:49.951892 2751119 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 12:32:49.951967 2751119 start.go:348] cluster config:
	{Name:download-only-848218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-848218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:32:49.954913 2751119 out.go:99] Starting "download-only-848218" primary control-plane node in "download-only-848218" cluster
	I0908 12:32:49.954943 2751119 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 12:32:49.957822 2751119 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:32:49.957851 2751119 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0908 12:32:49.957977 2751119 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:32:49.974161 2751119 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 12:32:49.974444 2751119 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 12:32:49.974559 2751119 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 12:32:50.017294 2751119 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I0908 12:32:50.017331 2751119 cache.go:58] Caching tarball of preloaded images
	I0908 12:32:50.018199 2751119 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0908 12:32:50.021534 2751119 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 12:32:50.021568 2751119 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 ...
	I0908 12:32:50.102250 2751119 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I0908 12:32:54.391508 2751119 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 ...
	I0908 12:32:54.391689 2751119 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-848218 host does not exist
	  To start a cluster, run: "minikube start -p download-only-848218"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
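The Last Start log above shows the preload path: minikube looks for a local preload tarball, finds the remote v1.28.0/containerd/arm64 tarball in the minikube-preloaded-volume-tarballs bucket, downloads it, and verifies the md5 checksum embedded in the URL. A minimal sketch of reproducing that check by hand, assuming curl and md5sum are available on the host; the URL and checksum come from the download.go line above:

    # Fetch the same preload tarball and confirm the checksum minikube verified.
    curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"
    md5sum preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
    # expected: 38d7f581f2fa4226c8af2c9106b982b7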

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-848218
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (5.89s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-479155 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-479155 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.891882225s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.89s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 12:33:02.349678 2751114 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0908 12:33:02.349715 2751114 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-479155
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-479155: exit status 85 (94.998234ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-848218 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-848218 │ jenkins │ v1.36.0 │ 08 Sep 25 12:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 12:32 UTC │ 08 Sep 25 12:32 UTC │
	│ delete  │ -p download-only-848218                                                                                                                                                               │ download-only-848218 │ jenkins │ v1.36.0 │ 08 Sep 25 12:32 UTC │ 08 Sep 25 12:32 UTC │
	│ start   │ -o=json --download-only -p download-only-479155 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-479155 │ jenkins │ v1.36.0 │ 08 Sep 25 12:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:32:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:32:56.499486 2751322 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:32:56.499616 2751322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:32:56.499627 2751322 out.go:374] Setting ErrFile to fd 2...
	I0908 12:32:56.499632 2751322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:32:56.499869 2751322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 12:32:56.500265 2751322 out.go:368] Setting JSON to true
	I0908 12:32:56.501106 2751322 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65728,"bootTime":1757269048,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 12:32:56.501172 2751322 start.go:140] virtualization:  
	I0908 12:32:56.504598 2751322 out.go:99] [download-only-479155] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 12:32:56.504817 2751322 notify.go:220] Checking for updates...
	I0908 12:32:56.507684 2751322 out.go:171] MINIKUBE_LOCATION=21508
	I0908 12:32:56.511002 2751322 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:32:56.514032 2751322 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 12:32:56.516862 2751322 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 12:32:56.519805 2751322 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 12:32:56.525393 2751322 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 12:32:56.525663 2751322 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:32:56.557568 2751322 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:32:56.557680 2751322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:32:56.612824 2751322 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-08 12:32:56.603653319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:32:56.612923 2751322 docker.go:318] overlay module found
	I0908 12:32:56.615850 2751322 out.go:99] Using the docker driver based on user configuration
	I0908 12:32:56.615876 2751322 start.go:304] selected driver: docker
	I0908 12:32:56.615883 2751322 start.go:918] validating driver "docker" against <nil>
	I0908 12:32:56.615974 2751322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:32:56.682598 2751322 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-08 12:32:56.67328706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:32:56.682770 2751322 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 12:32:56.683063 2751322 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 12:32:56.683214 2751322 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 12:32:56.686277 2751322 out.go:171] Using Docker driver with root privileges
	I0908 12:32:56.689124 2751322 cni.go:84] Creating CNI manager for ""
	I0908 12:32:56.689201 2751322 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 12:32:56.689214 2751322 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 12:32:56.689286 2751322 start.go:348] cluster config:
	{Name:download-only-479155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-479155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:32:56.692268 2751322 out.go:99] Starting "download-only-479155" primary control-plane node in "download-only-479155" cluster
	I0908 12:32:56.692296 2751322 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 12:32:56.695124 2751322 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:32:56.695154 2751322 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 12:32:56.695221 2751322 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:32:56.710732 2751322 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 12:32:56.710849 2751322 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 12:32:56.710871 2751322 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 12:32:56.710878 2751322 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 12:32:56.710886 2751322 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 12:32:56.760492 2751322 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
	I0908 12:32:56.760532 2751322 cache.go:58] Caching tarball of preloaded images
	I0908 12:32:56.761361 2751322 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 12:32:56.764402 2751322 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0908 12:32:56.764430 2751322 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 ...
	I0908 12:32:56.855374 2751322 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:08b8266a02e141b302c5f305615e1018 -> /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
	I0908 12:33:00.774996 2751322 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 ...
	I0908 12:33:00.775124 2751322 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-479155 host does not exist
	  To start a cluster, run: "minikube start -p download-only-479155"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.10s)
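
Note: the preload download above fetches the tarball with a ?checksum=md5:... query parameter and then verifies it on disk (preload.go:236/247/254). A minimal Go sketch of that verification step, not minikube's actual download/preload code; the file path and expected hash are taken from the log lines above:

    // verify_preload.go - hypothetical standalone checksum check, not minikube source.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        const expected = "08b8266a02e141b302c5f305615e1018" // from the ?checksum=md5:... URL above
        path := "/home/jenkins/minikube-integration/21508-2749258/.minikube/cache/preloaded-tarball/" +
            "preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4"

        f, err := os.Open(path)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            log.Fatal(err)
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != expected {
            log.Fatalf("checksum mismatch: got %s, want %s", got, expected)
        }
        fmt.Println("preload checksum OK")
    }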

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-479155
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I0908 12:33:03.650027 2751114 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-954690 --alsologtostderr --binary-mirror http://127.0.0.1:42535 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-954690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-954690
--- PASS: TestBinaryMirror (0.60s)
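
Note: TestBinaryMirror points minikube at a local HTTP mirror (--binary-mirror http://127.0.0.1:42535) instead of dl.k8s.io. A rough sketch of what such a mirror can be, just a static file server over a directory laid out like the upstream release tree; the ./mirror path is made up for illustration:

    // binary_mirror.go - hypothetical local mirror for kubectl/kubelet/kubeadm binaries.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve files from a directory shaped like dl.k8s.io, e.g.
        //   ./mirror/release/v1.34.0/bin/linux/arm64/kubectl
        //   ./mirror/release/v1.34.0/bin/linux/arm64/kubectl.sha256
        fs := http.FileServer(http.Dir("./mirror"))
        log.Println("serving binary mirror on http://127.0.0.1:42535")
        log.Fatal(http.ListenAndServe("127.0.0.1:42535", fs))
    }

With such a server running, the `start --download-only ... --binary-mirror http://127.0.0.1:42535` invocation above pulls the binaries from it rather than from the public release bucket.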

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-488661
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-488661: exit status 85 (74.821978ms)

                                                
                                                
-- stdout --
	* Profile "addons-488661" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-488661"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
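
Note: both PreSetup tests rely on minikube returning a distinct exit code (85) when an addon command targets a profile that does not exist; the meaning of that code is taken from this log, not from minikube documentation. A hedged Go sketch of asserting it with os/exec:

    // exit85_check.go - hypothetical check that 'addons enable' on a missing profile exits 85.
    package main

    import (
        "errors"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-488661")
        out, err := cmd.CombinedOutput()

        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 85 {
            log.Printf("got expected exit status 85; output:\n%s", out)
            return
        }
        log.Fatalf("expected exit status 85, got err=%v output=%s", err, out)
    }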

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-488661
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-488661: exit status 85 (66.716137ms)

                                                
                                                
-- stdout --
	* Profile "addons-488661" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-488661"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (209.83s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-488661 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-488661 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m29.832019192s)
--- PASS: TestAddons/Setup (209.83s)

                                                
                                    
TestAddons/serial/Volcano (40.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 63.217859ms
addons_test.go:884: volcano-controller stabilized in 63.276606ms
addons_test.go:868: volcano-scheduler stabilized in 63.304249ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-7knpb" [10076338-ebb2-4e0a-9cee-111d328770f4] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003202363s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-b6q9n" [2be7b7be-46c9-4cb3-99d4-04bdbfadb4f1] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.007106738s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-x2txn" [f95ae6b0-2d75-4a02-a3ef-db0714a0a82c] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.003582536s
addons_test.go:903: (dbg) Run:  kubectl --context addons-488661 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-488661 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-488661 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [a815303c-37d1-4fc5-b9e7-4192dc352dfe] Pending
helpers_test.go:352: "test-job-nginx-0" [a815303c-37d1-4fc5-b9e7-4192dc352dfe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [a815303c-37d1-4fc5-b9e7-4192dc352dfe] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.004519458s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-488661 addons disable volcano --alsologtostderr -v=1: (11.679854075s)
--- PASS: TestAddons/serial/Volcano (40.31s)
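
Note: the Volcano check follows the same pattern as the other addon tests: submit a manifest with kubectl, then poll until pods matching a label selector report Running. A simplified sketch of that polling loop using kubectl's jsonpath output (the selector, namespace, and context come from the log above; the loop itself is my own condensation, not helpers_test.go):

    // wait_for_pod.go - hypothetical poll for a Running pod by label selector.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(3 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "addons-488661",
                "-n", "my-volcano", "get", "pods",
                "-l", "volcano.sh/job-name=test-job",
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if err == nil && strings.Contains(string(out), "Running") {
                fmt.Println("test-job pod is Running")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for test-job pod")
    }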

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-488661 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-488661 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.83s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-488661 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-488661 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fed9b841-b89c-421c-b448-ff818a79ce69] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fed9b841-b89c-421c-b448-ff818a79ce69] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003387488s
addons_test.go:694: (dbg) Run:  kubectl --context addons-488661 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-488661 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-488661 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-488661 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.83s)
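
Note: the fake-credentials check boils down to exec'ing into the busybox pod and confirming the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT (the credentials file itself is mounted at /google-app-creds.json, as read above). A minimal sketch of that probe; pod, context, and variable names are from the log:

    // gcp_env_probe.go - hypothetical check of env vars injected by the gcp-auth webhook.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func envInPod(name string) (string, error) {
        out, err := exec.Command("kubectl", "--context", "addons-488661",
            "exec", "busybox", "--", "/bin/sh", "-c", "printenv "+name).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        for _, v := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
            val, err := envInPod(v)
            if err != nil || val == "" {
                log.Fatalf("%s not injected: %v", v, err)
            }
            fmt.Printf("%s=%s\n", v, val)
        }
    }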

                                                
                                    
TestAddons/parallel/Registry (17.34s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.655437ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-h7tx9" [bbe8715b-0481-4f4a-8225-a6e24974eb8d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003691203s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-hmhlm" [72692204-20c0-4e8e-9525-d9ac94dc53a8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006376294s
addons_test.go:392: (dbg) Run:  kubectl --context addons-488661 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-488661 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-488661 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.224619835s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 ip
2025/09/08 12:37:49 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.34s)
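
Note: the registry test checks reachability twice: in-cluster via a busybox pod running `wget --spider -S http://registry.kube-system.svc.cluster.local`, and from the host via the node IP on port 5000 (the `GET http://192.168.49.2:5000` debug line). A small sketch of the host-side half; the IP and port are from this run, and the /v2/ path is the standard registry API root used here only as an illustration:

    // registry_probe.go - hypothetical host-side reachability check for the registry addon.
    package main

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://192.168.49.2:5000/v2/")
        if err != nil {
            log.Fatalf("registry not reachable: %v", err)
        }
        defer resp.Body.Close()
        fmt.Println("registry responded with", resp.Status)
    }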

                                                
                                    
TestAddons/parallel/RegistryCreds (0.75s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.570788ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-488661
addons_test.go:332: (dbg) Run:  kubectl --context addons-488661 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.75s)

                                                
                                    
TestAddons/parallel/Ingress (20.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-488661 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-488661 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-488661 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [713546a6-c559-48f1-9577-8101b48eb50a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [713546a6-c559-48f1-9577-8101b48eb50a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003216418s
I0908 12:39:11.830191 2751114 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-488661 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-488661 addons disable ingress-dns --alsologtostderr -v=1: (1.15375671s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-488661 addons disable ingress --alsologtostderr -v=1: (7.80019258s)
--- PASS: TestAddons/parallel/Ingress (20.74s)
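
Note: the ingress check drives traffic through the controller by curling 127.0.0.1 with a `Host: nginx.example.com` header from inside the node (via `minikube ssh` above). The same request expressed in Go, to be run from wherever 127.0.0.1:80 is the ingress endpoint; purely illustrative:

    // ingress_probe.go - hypothetical request routed by Host header through ingress-nginx.
    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
        if err != nil {
            log.Fatal(err)
        }
        req.Host = "nginx.example.com" // routed to the nginx Service by the Ingress rule

        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }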

                                                
                                    
TestAddons/parallel/InspektorGadget (6.33s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-gfng2" [b7b3354c-6d56-49a1-ba99-2eab03e3e877] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004579581s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.33s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.81s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.840147ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-8dlpd" [e27b247a-1424-4ff4-8bfc-ca75c80a646e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003754973s
addons_test.go:463: (dbg) Run:  kubectl --context addons-488661 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.81s)

                                                
                                    
TestAddons/parallel/CSI (59.15s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0908 12:37:44.334594 2751114 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 12:37:44.337949 2751114 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 12:37:44.337981 2751114 kapi.go:107] duration metric: took 6.353709ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.365648ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-488661 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-488661 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [6bf558b3-d972-4421-a280-c672d8e9f62d] Pending
helpers_test.go:352: "task-pv-pod" [6bf558b3-d972-4421-a280-c672d8e9f62d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003849803s
addons_test.go:572: (dbg) Run:  kubectl --context addons-488661 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-488661 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-488661 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-488661 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-488661 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-488661 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-488661 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [54226cf6-a2d5-4672-979b-6adc0df216ba] Pending
helpers_test.go:352: "task-pv-pod-restore" [54226cf6-a2d5-4672-979b-6adc0df216ba] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [54226cf6-a2d5-4672-979b-6adc0df216ba] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.0039299s
addons_test.go:614: (dbg) Run:  kubectl --context addons-488661 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-488661 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-488661 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-488661 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.935426751s)
--- PASS: TestAddons/parallel/CSI (59.15s)
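
Note: the repeated `get pvc ... -o jsonpath={.status.phase}` lines above are a poll loop on the claim's phase before each next step (pod, snapshot, restore). A compact sketch of that loop (claim name, namespace, and context from the log; the loop and the target phase "Bound" are my own condensation of helpers_test.go:402, not its actual code):

    // wait_for_pvc.go - hypothetical wait until a PVC reports phase Bound.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func pvcPhase(name string) string {
        out, _ := exec.Command("kubectl", "--context", "addons-488661",
            "get", "pvc", name, "-n", "default",
            "-o", "jsonpath={.status.phase}").Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            if pvcPhase("hpvc") == "Bound" {
                fmt.Println("pvc hpvc is Bound")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for pvc hpvc")
    }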

                                                
                                    
TestAddons/parallel/Headlamp (22.81s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-488661 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-xnnlg" [a0b9f9d2-73a5-451f-8de2-443e18fd954f] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-xnnlg" [a0b9f9d2-73a5-451f-8de2-443e18fd954f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-xnnlg" [a0b9f9d2-73a5-451f-8de2-443e18fd954f] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.003490304s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-488661 addons disable headlamp --alsologtostderr -v=1: (5.814668093s)
--- PASS: TestAddons/parallel/Headlamp (22.81s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-nccpk" [47f4f3f6-df30-4d96-869d-13fdc4d16a79] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003874787s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
TestAddons/parallel/LocalPath (51.47s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-488661 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-488661 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-488661 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [8ab6474b-d170-48ec-935c-85f6d762a786] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [8ab6474b-d170-48ec-935c-85f6d762a786] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [8ab6474b-d170-48ec-935c-85f6d762a786] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003235369s
addons_test.go:967: (dbg) Run:  kubectl --context addons-488661 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 ssh "cat /opt/local-path-provisioner/pvc-57247d03-b2c1-459e-b06c-4d005c6e1e92_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-488661 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-488661 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-488661 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.012327763s)
--- PASS: TestAddons/parallel/LocalPath (51.47s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-ftlw7" [f8793eb5-afa6-4614-afe5-1e82019e2fdf] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004009688s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.58s)

                                                
                                    
TestAddons/parallel/Yakd (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-ntz99" [2f060d47-8090-484f-a45e-67447eb12dc8] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003257065s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-488661 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-488661 addons disable yakd --alsologtostderr -v=1: (5.790233107s)
--- PASS: TestAddons/parallel/Yakd (11.79s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.26s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-488661
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-488661: (11.969997274s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-488661
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-488661
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-488661
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

                                                
                                    
TestCertOptions (36.44s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-480035 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-480035 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.740735599s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-480035 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-480035 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-480035 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-480035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-480035
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-480035: (1.994235571s)
--- PASS: TestCertOptions (36.44s)
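
Note: TestCertOptions asserts that the extra --apiserver-ips/--apiserver-names values ended up as SANs in /var/lib/minikube/certs/apiserver.crt (and that the API server listens on 8555). The openssl call above inspects the cert with `-text`; the same SAN check in Go, given a PEM copy of the certificate on the host (the local "apiserver.crt" path is assumed for illustration):

    // check_sans.go - hypothetical SAN check for the apiserver certificate.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "net"
        "os"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // copied out of the node for illustration
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        wantIP := net.ParseIP("192.168.15.15")
        found := false
        for _, ip := range cert.IPAddresses {
            if ip.Equal(wantIP) {
                found = true
            }
        }
        fmt.Println("SAN 192.168.15.15 present:", found)
        fmt.Println("DNS SANs:", cert.DNSNames) // should include www.google.com per the flags above
    }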

                                                
                                    
TestCertExpiration (226.1s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-713579 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-713579 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.581192707s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-713579 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-713579 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.067349508s)
helpers_test.go:175: Cleaning up "cert-expiration-713579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-713579
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-713579: (2.449721996s)
--- PASS: TestCertExpiration (226.10s)

                                                
                                    
TestForceSystemdFlag (37.12s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-905095 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-905095 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.851337039s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-905095 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-905095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-905095
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-905095: (1.969052757s)
--- PASS: TestForceSystemdFlag (37.12s)
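
Note: both force-systemd tests end by cat'ing /etc/containerd/config.toml over `minikube ssh`; the assertion in docker_test.go:121 is essentially that the runc runtime options carry the systemd cgroup setting. A hedged sketch of that check (profile name from the log; the `SystemdCgroup = true` string match is an approximation of the real test, not its code):

    // systemd_cgroup_check.go - hypothetical check that containerd uses the systemd cgroup driver.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-905095",
            "ssh", "cat /etc/containerd/config.toml").CombinedOutput()
        if err != nil {
            log.Fatalf("ssh failed: %v\n%s", err, out)
        }
        if strings.Contains(string(out), "SystemdCgroup = true") {
            fmt.Println("containerd is using the systemd cgroup driver")
        } else {
            log.Fatal("SystemdCgroup = true not found in containerd config")
        }
    }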

                                                
                                    
TestForceSystemdEnv (35.37s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-386836 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-386836 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.009701488s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-386836 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-386836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-386836
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-386836: (2.032979315s)
--- PASS: TestForceSystemdEnv (35.37s)

                                                
                                    
TestDockerEnvContainerd (46.75s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-672575 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-672575 --driver=docker  --container-runtime=containerd: (31.150330001s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-672575"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ki8yCpS2J4ee/agent.2774013" SSH_AGENT_PID="2774014" DOCKER_HOST=ssh://docker@127.0.0.1:36438 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ki8yCpS2J4ee/agent.2774013" SSH_AGENT_PID="2774014" DOCKER_HOST=ssh://docker@127.0.0.1:36438 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ki8yCpS2J4ee/agent.2774013" SSH_AGENT_PID="2774014" DOCKER_HOST=ssh://docker@127.0.0.1:36438 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.301855319s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ki8yCpS2J4ee/agent.2774013" SSH_AGENT_PID="2774014" DOCKER_HOST=ssh://docker@127.0.0.1:36438 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-672575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-672575
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-672575: (1.926745655s)
--- PASS: TestDockerEnvContainerd (46.75s)
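
Note: TestDockerEnvContainerd exports a docker-env that points the host docker CLI at the node over SSH (DOCKER_HOST=ssh://docker@127.0.0.1:36438 plus an ssh-agent socket), then builds and lists images through it. A small Go sketch of running one such command with that environment applied; the socket path, agent PID, and port come from this run and would differ elsewhere:

    // dockerenv_run.go - hypothetical 'docker version' against the minikube node over SSH.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("docker", "version")
        cmd.Env = append(os.Environ(),
            "DOCKER_HOST=ssh://docker@127.0.0.1:36438",
            "SSH_AUTH_SOCK=/tmp/ssh-ki8yCpS2J4ee/agent.2774013",
            "SSH_AGENT_PID=2774014",
        )
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }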

                                                
                                    
TestErrorSpam/setup (31.42s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-265930 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-265930 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-265930 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-265930 --driver=docker  --container-runtime=containerd: (31.424838551s)
--- PASS: TestErrorSpam/setup (31.42s)

                                                
                                    
TestErrorSpam/start (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

                                                
                                    
TestErrorSpam/status (1.38s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 status
--- PASS: TestErrorSpam/status (1.38s)

                                                
                                    
TestErrorSpam/pause (1.83s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 pause
--- PASS: TestErrorSpam/pause (1.83s)

                                                
                                    
TestErrorSpam/unpause (1.85s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

                                                
                                    
TestErrorSpam/stop (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 stop: (1.258614445s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-265930 --log_dir /tmp/nospam-265930 stop
--- PASS: TestErrorSpam/stop (1.47s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21508-2749258/.minikube/files/etc/test/nested/copy/2751114/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (85.33s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-936282 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0908 12:41:34.200629 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:34.207963 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:34.219292 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:34.240641 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:34.281969 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:34.363313 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:34.524732 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:34.846085 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:35.487518 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:36.768841 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:39.331057 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:44.452482 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:54.693895 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:42:15.175263 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-936282 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m25.330540352s)
--- PASS: TestFunctional/serial/StartWithProxy (85.33s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.64s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0908 12:42:35.162193 2751114 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-936282 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-936282 --alsologtostderr -v=8: (6.635610631s)
functional_test.go:678: soft start took 6.636903479s for "functional-936282" cluster.
I0908 12:42:41.798127 2751114 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (6.64s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-936282 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.69s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-936282 cache add registry.k8s.io/pause:3.1: (1.357026933s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-936282 cache add registry.k8s.io/pause:3.3: (1.117805302s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-936282 cache add registry.k8s.io/pause:latest: (1.217894139s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.69s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-936282 /tmp/TestFunctionalserialCacheCmdcacheadd_local4065123358/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 cache add minikube-local-cache-test:functional-936282
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 cache delete minikube-local-cache-test:functional-936282
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-936282
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-936282 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.869441ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-936282 cache reload: (1.014258795s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
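For context, the cache_reload sequence above can be reproduced outside the test harness. The following is a minimal sketch, not part of the suite, assuming the out/minikube-linux-arm64 binary and the functional-936282 profile from this run are available locally; it mirrors the crictl rmi / cache reload / crictl inspecti steps logged above.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used in this report against the
// functional-936282 profile and prints the combined output.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-936282"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	// Remove the cached pause image inside the node, reload the cache,
	// then confirm the image is present again, mirroring cache_reload.
	_ = run("ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	_ = run("cache", "reload")
	if err := run("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}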

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 kubectl -- --context functional-936282 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-936282 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (45.17s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-936282 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 12:42:56.137555 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-936282 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.171622567s)
functional_test.go:776: restart took 45.171727704s for "functional-936282" cluster.
I0908 12:43:34.797264 2751114 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (45.17s)
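One way to confirm that the restart above actually propagated --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision is to inspect the kube-apiserver static pod's command line. A minimal sketch, assuming kubectl and the functional-936282 context are available on the host; the component=kube-apiserver label is the standard kubeadm one and is an assumption here, not something this report verifies.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read the kube-apiserver container command from the control-plane pod.
	out, err := exec.Command("kubectl", "--context", "functional-936282",
		"-n", "kube-system", "get", "pods", "-l", "component=kube-apiserver",
		"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	if strings.Contains(string(out), "NamespaceAutoProvision") {
		fmt.Println("enable-admission-plugins=NamespaceAutoProvision is active")
	} else {
		fmt.Println("extra-config not found in the apiserver command line")
	}
}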

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-936282 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.8s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-936282 logs: (1.794802829s)
--- PASS: TestFunctional/serial/LogsCmd (1.80s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 logs --file /tmp/TestFunctionalserialLogsFileCmd317179793/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-936282 logs --file /tmp/TestFunctionalserialLogsFileCmd317179793/001/logs.txt: (1.76289872s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                    
TestFunctional/serial/InvalidService (4.54s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-936282 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-936282
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-936282: exit status 115 (655.524966ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31762 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-936282 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.54s)
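The SVC_UNREACHABLE exit above reflects that invalid-svc selects no running pods. A small sketch of the same check via the Endpoints object, assuming kubectl and the functional-936282 context, and that invalid-svc is still applied from testdata/invalidsvc.yaml.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// An empty .subsets on the Endpoints object means the Service has no
	// ready backends, which is why "minikube service invalid-svc" exits 115.
	out, err := exec.Command("kubectl", "--context", "functional-936282",
		"get", "endpoints", "invalid-svc", "-o", "jsonpath={.subsets}").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err, string(out))
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("invalid-svc has no endpoints; the service command reports SVC_UNREACHABLE")
	} else {
		fmt.Println("endpoints:", string(out))
	}
}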

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-936282 config get cpus: exit status 14 (90.639987ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-936282 config get cpus: exit status 14 (80.911267ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
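The exit status 14 seen twice above is what "config get" returns for an unset key. A sketch of the same set/get/unset round-trip driven from Go, assuming the out/minikube-linux-arm64 binary and profile from this run.

package main

import (
	"fmt"
	"os/exec"
)

// config runs "minikube -p functional-936282 config <args>" and returns the
// combined output plus the process exit code (0 when the command succeeds).
func config(args ...string) (string, int) {
	cmd := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-936282", "config"}, args...)...)
	out, err := cmd.CombinedOutput()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	}
	return string(out), code
}

func main() {
	if _, code := config("get", "cpus"); code == 14 {
		fmt.Println("cpus is unset (exit 14), matching the log above")
	}
	config("set", "cpus", "2")
	val, _ := config("get", "cpus")
	fmt.Print("cpus = ", val)
	config("unset", "cpus")
}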

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-936282 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-936282 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 2791192: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.96s)

                                                
                                    
TestFunctional/parallel/DryRun (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-936282 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-936282 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (266.884314ms)

                                                
                                                
-- stdout --
	* [functional-936282] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:44:24.175916 2790633 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:44:24.176067 2790633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:44:24.176078 2790633 out.go:374] Setting ErrFile to fd 2...
	I0908 12:44:24.176083 2790633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:44:24.176364 2790633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 12:44:24.176721 2790633 out.go:368] Setting JSON to false
	I0908 12:44:24.177675 2790633 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":66416,"bootTime":1757269048,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 12:44:24.177739 2790633 start.go:140] virtualization:  
	I0908 12:44:24.181332 2790633 out.go:179] * [functional-936282] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 12:44:24.186237 2790633 notify.go:220] Checking for updates...
	I0908 12:44:24.190321 2790633 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 12:44:24.193329 2790633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:44:24.196297 2790633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 12:44:24.199263 2790633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 12:44:24.202141 2790633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 12:44:24.205014 2790633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:44:24.208227 2790633 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 12:44:24.208829 2790633 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:44:24.247577 2790633 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:44:24.247691 2790633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:44:24.355316 2790633 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 12:44:24.34618744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:44:24.355420 2790633 docker.go:318] overlay module found
	I0908 12:44:24.360652 2790633 out.go:179] * Using the docker driver based on existing profile
	I0908 12:44:24.363404 2790633 start.go:304] selected driver: docker
	I0908 12:44:24.363426 2790633 start.go:918] validating driver "docker" against &{Name:functional-936282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-936282 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:44:24.363546 2790633 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:44:24.367302 2790633 out.go:203] 
	W0908 12:44:24.370231 2790633 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 12:44:24.373089 2790633 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-936282 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.60s)
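For reference, the RSRC_INSUFFICIENT_REQ_MEMORY failure above is simply the requested 250MiB falling below the 1800MB minimum quoted in the error. A back-of-the-envelope sketch of that comparison; the names below are illustrative only and are not minikube's internals.

package main

import "fmt"

// minUsableMB is the usable minimum quoted in the error message above.
const minUsableMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, like the --memory 250MB dry run
	fmt.Println(validateMemory(4096)) // passes, like the profile's Memory:4096
}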

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-936282 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-936282 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (254.180619ms)

                                                
                                                
-- stdout --
	* [functional-936282] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:44:24.809188 2790847 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:44:24.809331 2790847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:44:24.809345 2790847 out.go:374] Setting ErrFile to fd 2...
	I0908 12:44:24.809351 2790847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:44:24.810662 2790847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 12:44:24.811143 2790847 out.go:368] Setting JSON to false
	I0908 12:44:24.812231 2790847 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":66417,"bootTime":1757269048,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 12:44:24.812441 2790847 start.go:140] virtualization:  
	I0908 12:44:24.815858 2790847 out.go:179] * [functional-936282] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	I0908 12:44:24.819417 2790847 notify.go:220] Checking for updates...
	I0908 12:44:24.819989 2790847 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 12:44:24.823289 2790847 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:44:24.826191 2790847 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 12:44:24.829194 2790847 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 12:44:24.832091 2790847 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 12:44:24.836979 2790847 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:44:24.840457 2790847 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 12:44:24.841151 2790847 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:44:24.874707 2790847 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:44:24.874819 2790847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:44:24.944395 2790847 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 12:44:24.93435664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:44:24.944503 2790847 docker.go:318] overlay module found
	I0908 12:44:24.948309 2790847 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0908 12:44:24.951063 2790847 start.go:304] selected driver: docker
	I0908 12:44:24.951081 2790847 start.go:918] validating driver "docker" against &{Name:functional-936282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-936282 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:44:24.951173 2790847 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:44:24.957206 2790847 out.go:203] 
	W0908 12:44:24.959974 2790847 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 12:44:24.962787 2790847 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-936282 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-936282 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-wncv2" [43f5c4ee-e151-4184-aebd-80fc601e5139] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-wncv2" [43f5c4ee-e151-4184-aebd-80fc601e5139] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004285796s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31389
functional_test.go:1680: http://192.168.49.2:31389: success! body:
Request served by hello-node-connect-7d85dfc575-wncv2

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31389
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
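The final step above is an HTTP GET against the NodePort URL that "minikube service hello-node-connect --url" printed. A minimal sketch of that check; the URL below is the one captured in this run and will differ on other hosts.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// GET the NodePort URL and print the echo-server's reflected request.
	resp, err := http.Get("http://192.168.49.2:31389/")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d\n%s", resp.StatusCode, body)
}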

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [023171f6-d763-4a16-a13e-f710a233c651] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003778846s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-936282 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-936282 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-936282 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-936282 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [cb91426b-9694-46be-8e27-011f0c9ffd49] Pending
helpers_test.go:352: "sp-pod" [cb91426b-9694-46be-8e27-011f0c9ffd49] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [cb91426b-9694-46be-8e27-011f0c9ffd49] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004126157s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-936282 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-936282 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-936282 delete -f testdata/storage-provisioner/pod.yaml: (1.344050528s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-936282 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b532d2c0-296e-48c2-adcf-6ac74269879a] Pending
helpers_test.go:352: "sp-pod" [b532d2c0-296e-48c2-adcf-6ac74269879a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0908 12:44:18.059794 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [b532d2c0-296e-48c2-adcf-6ac74269879a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003462646s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-936282 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.40s)
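The persistence check above boils down to: write a file through the first sp-pod, recreate the pod from the same manifest, and confirm the file is still on the claim. A sketch of those steps, assuming kubectl, the functional-936282 context, and the repo's testdata/storage-provisioner/pod.yaml; the readiness wait the test performs between delete and exec is omitted here.

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the functional-936282 context and echoes the result.
func kubectl(args ...string) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-936282"}, args...)...).CombinedOutput()
	fmt.Printf("kubectl %v\n%s(err=%v)\n", args, out, err)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// Once the new sp-pod is Running, the file written before the delete
	// should still be visible through the PersistentVolumeClaim.
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}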

                                                
                                    
TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh -n functional-936282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 cp functional-936282:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1448092359/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh -n functional-936282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh -n functional-936282 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.95s)

                                                
                                    
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/2751114/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "sudo cat /etc/test/nested/copy/2751114/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
TestFunctional/parallel/CertSync (2.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/2751114.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "sudo cat /etc/ssl/certs/2751114.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/2751114.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "sudo cat /usr/share/ca-certificates/2751114.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/27511142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "sudo cat /etc/ssl/certs/27511142.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/27511142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "sudo cat /usr/share/ca-certificates/27511142.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.27s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-936282 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-936282 ssh "sudo systemctl is-active docker": exit status 1 (325.191956ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-936282 ssh "sudo systemctl is-active crio": exit status 1 (342.139193ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
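The non-zero exits above are expected: with containerd as the runtime, "systemctl is-active docker" and "systemctl is-active crio" print "inactive" and exit non-zero inside the node (status 3 in the stderr capture), which minikube ssh surfaces as a failed command. A sketch of the same probe, assuming the out/minikube-linux-arm64 binary and profile from this run.

package main

import (
	"fmt"
	"os/exec"
)

// isActive asks the node whether a systemd unit is active; "inactive" plus a
// non-zero exit is the expected answer for docker and crio on a
// containerd-runtime cluster.
func isActive(unit string) {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-936282",
		"ssh", "sudo systemctl is-active "+unit)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s: %s(err=%v)\n", unit, out, err)
}

func main() {
	isActive("docker")
	isActive("crio")
	isActive("containerd") // the active runtime should report "active"
}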

                                                
                                    
TestFunctional/parallel/License (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.38s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-936282 version -o=json --components: (1.487700917s)
--- PASS: TestFunctional/parallel/Version/components (1.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-936282 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-936282
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-936282
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-936282 image ls --format short --alsologtostderr:
I0908 12:44:27.516957 2791471 out.go:360] Setting OutFile to fd 1 ...
I0908 12:44:27.517211 2791471 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:44:27.517239 2791471 out.go:374] Setting ErrFile to fd 2...
I0908 12:44:27.517257 2791471 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:44:27.517533 2791471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
I0908 12:44:27.518171 2791471 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 12:44:27.518362 2791471 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 12:44:27.518850 2791471 cli_runner.go:164] Run: docker container inspect functional-936282 --format={{.State.Status}}
I0908 12:44:27.543176 2791471 ssh_runner.go:195] Run: systemctl --version
I0908 12:44:27.543237 2791471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-936282
I0908 12:44:27.560042 2791471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36448 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/functional-936282/id_rsa Username:docker}
I0908 12:44:27.654803 2791471 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-936282 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-936282  │ sha256:4bcc86 │ 988B   │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0            │ sha256:a25f5e │ 15.8MB │
│ docker.io/kicbase/echo-server               │ functional-936282  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0            │ sha256:996be7 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.0            │ sha256:6fc32d │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/library/nginx                     │ latest             │ sha256:47ef87 │ 68.9MB │
│ localhost/my-image                          │ functional-936282  │ sha256:2a8f47 │ 831kB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.0            │ sha256:d29193 │ 24.6MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/library/nginx                     │ alpine             │ sha256:35f3cb │ 22.9MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-936282 image ls --format table --alsologtostderr:
I0908 12:44:33.639009 2791862 out.go:360] Setting OutFile to fd 1 ...
I0908 12:44:33.639112 2791862 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:44:33.639168 2791862 out.go:374] Setting ErrFile to fd 2...
I0908 12:44:33.639172 2791862 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:44:33.639434 2791862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
I0908 12:44:33.640108 2791862 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 12:44:33.640240 2791862 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 12:44:33.640695 2791862 cli_runner.go:164] Run: docker container inspect functional-936282 --format={{.State.Status}}
I0908 12:44:33.658463 2791862 ssh_runner.go:195] Run: systemctl --version
I0908 12:44:33.658515 2791862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-936282
I0908 12:44:33.679153 2791862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36448 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/functional-936282/id_rsa Username:docker}
I0908 12:44:33.767129 2791862 ssh_runner.go:195] Run: sudo crictl images --output json
2025/09/08 12:44:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-936282 image ls --format json --alsologtostderr:
[{"id":"sha256:47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57"],"repoTags":["docker.io/library/nginx:latest"],"size":"68855984"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c
0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"20720494"},{"id":"sha256:6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"22788036"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-936282","docker.io/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:2a8f4786ffacd9f557dce64af6100e5df390acb9987a6d903a19f5f9a7b6c8f2","repoDi
gests":[],"repoTags":["localhost/my-image:functional-936282"],"size":"830618"},{"id":"sha256:d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"24570751"},{"id":"sha256:a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"15779792"},{"id":"sha256:35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22948447"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6c
eeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:4bcc863fe960ee1a1cd74cfab935c09b4cabe6e475c0998da97cb
88d716b5fdf","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-936282"],"size":"988"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-936282 image ls --format json --alsologtostderr:
I0908 12:44:33.365667 2791827 out.go:360] Setting OutFile to fd 1 ...
I0908 12:44:33.365832 2791827 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:44:33.365844 2791827 out.go:374] Setting ErrFile to fd 2...
I0908 12:44:33.365849 2791827 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:44:33.366096 2791827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
I0908 12:44:33.366775 2791827 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 12:44:33.366892 2791827 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 12:44:33.367324 2791827 cli_runner.go:164] Run: docker container inspect functional-936282 --format={{.State.Status}}
I0908 12:44:33.386334 2791827 ssh_runner.go:195] Run: systemctl --version
I0908 12:44:33.386439 2791827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-936282
I0908 12:44:33.414620 2791827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36448 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/functional-936282/id_rsa Username:docker}
I0908 12:44:33.511223 2791827 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-936282 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-936282
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
repoTags:
- docker.io/library/nginx:latest
size: "68855984"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "20720494"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
repoTags:
- docker.io/library/nginx:alpine
size: "22948447"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "24570751"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:4bcc863fe960ee1a1cd74cfab935c09b4cabe6e475c0998da97cb88d716b5fdf
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-936282
size: "988"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "22788036"
- id: sha256:a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "15779792"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-936282 image ls --format yaml --alsologtostderr:
I0908 12:44:27.757231 2791504 out.go:360] Setting OutFile to fd 1 ...
I0908 12:44:27.757354 2791504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:44:27.757366 2791504 out.go:374] Setting ErrFile to fd 2...
I0908 12:44:27.757370 2791504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:44:27.757604 2791504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
I0908 12:44:27.758198 2791504 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 12:44:27.758321 2791504 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 12:44:27.759673 2791504 cli_runner.go:164] Run: docker container inspect functional-936282 --format={{.State.Status}}
I0908 12:44:27.778409 2791504 ssh_runner.go:195] Run: systemctl --version
I0908 12:44:27.778497 2791504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-936282
I0908 12:44:27.795581 2791504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36448 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/functional-936282/id_rsa Username:docker}
I0908 12:44:27.890874 2791504 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
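
The four ImageList* runs above differ only in the renderer: each invocation SSHes into the node, runs sudo crictl images --output json (visible in the Stderr blocks), and formats the result as short, table, json, or yaml. A minimal sketch against this run's profile (functional-936282; substitute your own profile name):

    out/minikube-linux-arm64 -p functional-936282 image ls --format short   # bare repo:tag list
    out/minikube-linux-arm64 -p functional-936282 image ls --format table   # IMAGE / TAG / IMAGE ID / SIZE table
    out/minikube-linux-arm64 -p functional-936282 image ls --format json
    out/minikube-linux-arm64 -p functional-936282 image ls --format yaml
    # add --alsologtostderr, as the tests do, to see the ssh/crictl steps from the Stderr blocks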

TestFunctional/parallel/ImageCommands/ImageBuild (5.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-936282 ssh pgrep buildkitd: exit status 1 (281.245344ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image build -t localhost/my-image:functional-936282 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-936282 image build -t localhost/my-image:functional-936282 testdata/build --alsologtostderr: (4.778102448s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-936282 image build -t localhost/my-image:functional-936282 testdata/build --alsologtostderr:
I0908 12:44:28.281760 2791594 out.go:360] Setting OutFile to fd 1 ...
I0908 12:44:28.283320 2791594 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:44:28.283371 2791594 out.go:374] Setting ErrFile to fd 2...
I0908 12:44:28.283391 2791594 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:44:28.283678 2791594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
I0908 12:44:28.284365 2791594 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 12:44:28.286309 2791594 config.go:182] Loaded profile config "functional-936282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 12:44:28.286849 2791594 cli_runner.go:164] Run: docker container inspect functional-936282 --format={{.State.Status}}
I0908 12:44:28.306562 2791594 ssh_runner.go:195] Run: systemctl --version
I0908 12:44:28.306628 2791594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-936282
I0908 12:44:28.337853 2791594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36448 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/functional-936282/id_rsa Username:docker}
I0908 12:44:28.431679 2791594 build_images.go:161] Building image from path: /tmp/build.1146512619.tar
I0908 12:44:28.431969 2791594 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 12:44:28.441818 2791594 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1146512619.tar
I0908 12:44:28.445626 2791594 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1146512619.tar: stat -c "%s %y" /var/lib/minikube/build/build.1146512619.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1146512619.tar': No such file or directory
I0908 12:44:28.445652 2791594 ssh_runner.go:362] scp /tmp/build.1146512619.tar --> /var/lib/minikube/build/build.1146512619.tar (3072 bytes)
I0908 12:44:28.477409 2791594 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1146512619
I0908 12:44:28.487324 2791594 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1146512619 -xf /var/lib/minikube/build/build.1146512619.tar
I0908 12:44:28.496421 2791594 containerd.go:394] Building image: /var/lib/minikube/build/build.1146512619
I0908 12:44:28.496544 2791594 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1146512619 --local dockerfile=/var/lib/minikube/build/build.1146512619 --output type=image,name=localhost/my-image:functional-936282
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s
#6 [2/3] RUN true
#6 DONE 1.8s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:9e5fe02f2ebdcc54b3d126b1035923b95485512ca5e692b8f8ae266d464dc685
#8 exporting manifest sha256:9e5fe02f2ebdcc54b3d126b1035923b95485512ca5e692b8f8ae266d464dc685 0.0s done
#8 exporting config sha256:2a8f4786ffacd9f557dce64af6100e5df390acb9987a6d903a19f5f9a7b6c8f2 done
#8 naming to localhost/my-image:functional-936282 done
#8 DONE 0.1s
I0908 12:44:32.951150 2791594 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1146512619 --local dockerfile=/var/lib/minikube/build/build.1146512619 --output type=image,name=localhost/my-image:functional-936282: (4.454547201s)
I0908 12:44:32.951250 2791594 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1146512619
I0908 12:44:32.981105 2791594 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1146512619.tar
I0908 12:44:32.999561 2791594 build_images.go:217] Built localhost/my-image:functional-936282 from /tmp/build.1146512619.tar
I0908 12:44:32.999615 2791594 build_images.go:133] succeeded building to: functional-936282
I0908 12:44:32.999626 2791594 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.35s)
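
ImageBuild first probes for a buildkitd process; the non-zero pgrep exit above just means none was found, and the test continues anyway. The build then packs testdata/build into a tar, copies it to /var/lib/minikube/build on the node, and drives sudo buildctl build over SSH, all visible in the Stderr log. A rough sketch of the same flow for this run's profile:

    out/minikube-linux-arm64 -p functional-936282 ssh pgrep buildkitd        # exit status 1 is tolerated here
    out/minikube-linux-arm64 -p functional-936282 image build -t localhost/my-image:functional-936282 testdata/build --alsologtostderr
    out/minikube-linux-arm64 -p functional-936282 image ls                   # localhost/my-image:functional-936282 should now appear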

TestFunctional/parallel/ImageCommands/Setup (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-936282
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.77s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image load --daemon kicbase/echo-server:functional-936282 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-936282 image load --daemon kicbase/echo-server:functional-936282 --alsologtostderr: (1.195306893s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image load --daemon kicbase/echo-server:functional-936282 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-936282 image load --daemon kicbase/echo-server:functional-936282 --alsologtostderr: (1.245887372s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.51s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-936282 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-936282 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-24b6q" [52260813-15ad-48ae-be3f-eac5c40bd365] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-24b6q" [52260813-15ad-48ae-be3f-eac5c40bd365] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003511615s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.25s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-936282
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image load --daemon kicbase/echo-server:functional-936282 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image save kicbase/echo-server:functional-936282 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image rm kicbase/echo-server:functional-936282 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-936282
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 image save --daemon kicbase/echo-server:functional-936282 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-936282
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
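
ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together form a save / remove / restore round-trip for the tagged echo-server image. A condensed sketch (the tar path is shortened here; this run used the Jenkins workspace path shown above):

    out/minikube-linux-arm64 -p functional-936282 image save kicbase/echo-server:functional-936282 ./echo-server-save.tar --alsologtostderr
    out/minikube-linux-arm64 -p functional-936282 image rm kicbase/echo-server:functional-936282 --alsologtostderr
    out/minikube-linux-arm64 -p functional-936282 image load ./echo-server-save.tar --alsologtostderr
    out/minikube-linux-arm64 -p functional-936282 image save --daemon kicbase/echo-server:functional-936282 --alsologtostderr
    docker image inspect kicbase/echo-server:functional-936282               # confirms the image is back in the local Docker daemon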

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-936282 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-936282 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-936282 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 2787612: os: process already finished
helpers_test.go:519: unable to terminate pid 2787492: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-936282 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-936282 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-936282 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [94ccda05-72ce-4fab-a326-6e722ea2159a] Pending
helpers_test.go:352: "nginx-svc" [94ccda05-72ce-4fab-a326-6e722ea2159a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [94ccda05-72ce-4fab-a326-6e722ea2159a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003575209s
I0908 12:44:01.833066 2751114 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 service list -o json
functional_test.go:1504: Took "347.016595ms" to run "out/minikube-linux-arm64 -p functional-936282 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32181
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32181
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
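
The ServiceCmd checks above all resolve the same hello-node NodePort endpoint (192.168.49.2:32181 in this run), just through different output modes of minikube service. A brief sketch, reusing the deployment created in DeployApp:

    kubectl --context functional-936282 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-936282 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-arm64 -p functional-936282 service list               # or: service list -o json
    out/minikube-linux-arm64 -p functional-936282 service --namespace=default --https --url hello-node
    out/minikube-linux-arm64 -p functional-936282 service hello-node --url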

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-936282 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.45.121 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-936282 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
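
The TunnelCmd serial steps keep a tunnel process running so LoadBalancer services receive a reachable ingress IP; WaitService deploys nginx-svc, IngressIP reads the assigned address, and AccessDirect confirms it answers (10.104.45.121 here). A minimal sketch, assuming the tunnel is kept running in a separate shell rather than as the test's daemon helper:

    out/minikube-linux-arm64 -p functional-936282 tunnel --alsologtostderr   # leave running; stopping it tears the route down
    kubectl --context functional-936282 apply -f testdata/testsvc.yaml
    kubectl --context functional-936282 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
    # curl the printed address to verify connectivity before stopping the tunnel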

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "382.740062ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "83.318305ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "392.050626ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.632283ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
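
The ProfileCmd tests above only time the profile listing variants; for reference, the invocations exercised are:

    out/minikube-linux-arm64 profile list
    out/minikube-linux-arm64 profile list -l
    out/minikube-linux-arm64 profile list -o json
    out/minikube-linux-arm64 profile list -o json --light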

TestFunctional/parallel/MountCmd/any-port (8.09s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-936282 /tmp/TestFunctionalparallelMountCmdany-port3790868434/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757335453145642401" to /tmp/TestFunctionalparallelMountCmdany-port3790868434/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757335453145642401" to /tmp/TestFunctionalparallelMountCmdany-port3790868434/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757335453145642401" to /tmp/TestFunctionalparallelMountCmdany-port3790868434/001/test-1757335453145642401
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-936282 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.606962ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0908 12:44:13.475351 2751114 retry.go:31] will retry after 514.771871ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 12:44 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 12:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 12:44 test-1757335453145642401
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh cat /mount-9p/test-1757335453145642401
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-936282 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [46855e55-e28c-4297-9cb4-1e79e8fe9851] Pending
helpers_test.go:352: "busybox-mount" [46855e55-e28c-4297-9cb4-1e79e8fe9851] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [46855e55-e28c-4297-9cb4-1e79e8fe9851] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [46855e55-e28c-4297-9cb4-1e79e8fe9851] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004250525s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-936282 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-936282 /tmp/TestFunctionalparallelMountCmdany-port3790868434/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.09s)
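
MountCmd/any-port 9p-mounts a host temp directory into the guest at /mount-9p, retries findmnt once while the mount comes up, lists and reads the seeded files, runs the busybox-mount pod against the share, and finally unmounts. The host-side steps look roughly like this (the source directory here is a hypothetical scratch path, not the test's generated temp dir):

    out/minikube-linux-arm64 mount -p functional-936282 /tmp/mount-src:/mount-9p --alsologtostderr -v=1   # leave running
    out/minikube-linux-arm64 -p functional-936282 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-936282 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 -p functional-936282 ssh "sudo umount -f /mount-9p"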

TestFunctional/parallel/MountCmd/specific-port (1.75s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-936282 /tmp/TestFunctionalparallelMountCmdspecific-port124439428/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-936282 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (332.216691ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0908 12:44:21.567178 2751114 retry.go:31] will retry after 408.843326ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-936282 /tmp/TestFunctionalparallelMountCmdspecific-port124439428/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-936282 ssh "sudo umount -f /mount-9p": exit status 1 (267.8511ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-936282 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-936282 /tmp/TestFunctionalparallelMountCmdspecific-port124439428/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.38s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-936282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3194361260/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-936282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3194361260/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-936282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3194361260/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-936282 ssh "findmnt -T" /mount1: exit status 1 (604.894206ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0908 12:44:23.595879 2751114 retry.go:31] will retry after 525.344833ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-936282 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-936282 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-936282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3194361260/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-936282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3194361260/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-936282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3194361260/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.38s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-936282
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-936282
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-936282
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (134.68s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0908 12:46:34.195430 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m13.829823094s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (134.68s)
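
The two commands driven above can be reproduced outside the test harness. A minimal Go sketch of the same start/status sequence, with the flags copied verbatim from the log (the profile name is specific to this run):

// hastart.go: start an HA cluster and print its status, as the test does.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
}

func main() {
	// Per the status output later in this report, --ha provisions three
	// control-plane nodes; --wait true blocks until the cluster is healthy.
	run("-p", "ha-879585", "start", "--ha", "--memory", "3072", "--wait", "true",
		"--driver=docker", "--container-runtime=containerd")
	run("-p", "ha-879585", "status", "--alsologtostderr", "-v", "5")
}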

                                                
                                    
TestMultiControlPlane/serial/DeployApp (43.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- rollout status deployment/busybox
E0908 12:47:01.902545 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 kubectl -- rollout status deployment/busybox: (40.486685832s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-lzn96 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-v59vk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-x4dkx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-lzn96 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-v59vk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-x4dkx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-lzn96 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-v59vk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-x4dkx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (43.56s)
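
The nslookup matrix above (three busybox pods, three names each) is the DNS health check for the deployed app. A minimal sketch of the same loop in Go, assuming the kubeconfig context from this run; it lists pods exactly as the test does, with no label selector:

// dnscheck.go: resolve cluster and external names from every pod in the
// default namespace, mirroring the kubectl exec calls in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "ha-879585", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range names {
			err := exec.Command("kubectl", "--context", "ha-879585",
				"exec", pod, "--", "nslookup", name).Run()
			fmt.Printf("%s -> %s: ok=%v\n", pod, name, err == nil)
		}
	}
}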

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-lzn96 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-lzn96 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-v59vk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-v59vk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-x4dkx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 kubectl -- exec busybox-7b57f96db7-x4dkx -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.56s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (16.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 node add --alsologtostderr -v 5: (14.985379637s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5: (1.405059081s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (16.39s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-879585 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.20s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.163068258s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.16s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 status --output json --alsologtostderr -v 5: (1.018126226s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp testdata/cp-test.txt ha-879585:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1969010147/001/cp-test_ha-879585.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585:/home/docker/cp-test.txt ha-879585-m02:/home/docker/cp-test_ha-879585_ha-879585-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m02 "sudo cat /home/docker/cp-test_ha-879585_ha-879585-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585:/home/docker/cp-test.txt ha-879585-m03:/home/docker/cp-test_ha-879585_ha-879585-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m03 "sudo cat /home/docker/cp-test_ha-879585_ha-879585-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585:/home/docker/cp-test.txt ha-879585-m04:/home/docker/cp-test_ha-879585_ha-879585-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m04 "sudo cat /home/docker/cp-test_ha-879585_ha-879585-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp testdata/cp-test.txt ha-879585-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1969010147/001/cp-test_ha-879585-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585-m02:/home/docker/cp-test.txt ha-879585:/home/docker/cp-test_ha-879585-m02_ha-879585.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585 "sudo cat /home/docker/cp-test_ha-879585-m02_ha-879585.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585-m02:/home/docker/cp-test.txt ha-879585-m03:/home/docker/cp-test_ha-879585-m02_ha-879585-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m03 "sudo cat /home/docker/cp-test_ha-879585-m02_ha-879585-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585-m02:/home/docker/cp-test.txt ha-879585-m04:/home/docker/cp-test_ha-879585-m02_ha-879585-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m04 "sudo cat /home/docker/cp-test_ha-879585-m02_ha-879585-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp testdata/cp-test.txt ha-879585-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1969010147/001/cp-test_ha-879585-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585-m03:/home/docker/cp-test.txt ha-879585:/home/docker/cp-test_ha-879585-m03_ha-879585.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585 "sudo cat /home/docker/cp-test_ha-879585-m03_ha-879585.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585-m03:/home/docker/cp-test.txt ha-879585-m02:/home/docker/cp-test_ha-879585-m03_ha-879585-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m02 "sudo cat /home/docker/cp-test_ha-879585-m03_ha-879585-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585-m03:/home/docker/cp-test.txt ha-879585-m04:/home/docker/cp-test_ha-879585-m03_ha-879585-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m04 "sudo cat /home/docker/cp-test_ha-879585-m03_ha-879585-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp testdata/cp-test.txt ha-879585-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1969010147/001/cp-test_ha-879585-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585-m04:/home/docker/cp-test.txt ha-879585:/home/docker/cp-test_ha-879585-m04_ha-879585.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585 "sudo cat /home/docker/cp-test_ha-879585-m04_ha-879585.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585-m04:/home/docker/cp-test.txt ha-879585-m02:/home/docker/cp-test_ha-879585-m04_ha-879585-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m02 "sudo cat /home/docker/cp-test_ha-879585-m04_ha-879585-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 cp ha-879585-m04:/home/docker/cp-test.txt ha-879585-m03:/home/docker/cp-test_ha-879585-m04_ha-879585-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 ssh -n ha-879585-m03 "sudo cat /home/docker/cp-test_ha-879585-m04_ha-879585-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.75s)
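
Each pair of lines above is one hop of a copy/verify matrix: `minikube cp` pushes a file to a node, then `minikube ssh -n <node> "sudo cat ..."` reads it back. A minimal sketch of a single hop in Go (not part of the test suite; names and paths mirror this log):

// cpcheck.go: copy a local file to one node and confirm the round trip.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile, node, remote = "ha-879585", "ha-879585-m02", "/home/docker/cp-test.txt"
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if err := exec.Command("out/minikube-linux-arm64", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":"+remote).Run(); err != nil {
		panic(err)
	}
	back, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh",
		"-n", node, "sudo cat "+remote).Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("round-trip matches:", bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(back)))
}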

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 node stop m02 --alsologtostderr -v 5: (12.057598668s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5: exit status 7 (718.339981ms)

                                                
                                                
-- stdout --
	ha-879585
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-879585-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-879585-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-879585-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:48:27.159948 2808404 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:48:27.160273 2808404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:48:27.160326 2808404 out.go:374] Setting ErrFile to fd 2...
	I0908 12:48:27.160346 2808404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:48:27.160875 2808404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 12:48:27.161153 2808404 out.go:368] Setting JSON to false
	I0908 12:48:27.161223 2808404 mustload.go:65] Loading cluster: ha-879585
	I0908 12:48:27.161699 2808404 config.go:182] Loaded profile config "ha-879585": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 12:48:27.161747 2808404 status.go:174] checking status of ha-879585 ...
	I0908 12:48:27.162288 2808404 cli_runner.go:164] Run: docker container inspect ha-879585 --format={{.State.Status}}
	I0908 12:48:27.162431 2808404 notify.go:220] Checking for updates...
	I0908 12:48:27.183792 2808404 status.go:371] ha-879585 host status = "Running" (err=<nil>)
	I0908 12:48:27.183815 2808404 host.go:66] Checking if "ha-879585" exists ...
	I0908 12:48:27.184141 2808404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-879585
	I0908 12:48:27.214694 2808404 host.go:66] Checking if "ha-879585" exists ...
	I0908 12:48:27.215254 2808404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:48:27.215328 2808404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-879585
	I0908 12:48:27.238187 2808404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36453 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/ha-879585/id_rsa Username:docker}
	I0908 12:48:27.327712 2808404 ssh_runner.go:195] Run: systemctl --version
	I0908 12:48:27.332177 2808404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:48:27.344208 2808404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:48:27.405525 2808404 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-08 12:48:27.395434447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:48:27.406102 2808404 kubeconfig.go:125] found "ha-879585" server: "https://192.168.49.254:8443"
	I0908 12:48:27.406151 2808404 api_server.go:166] Checking apiserver status ...
	I0908 12:48:27.406200 2808404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:48:27.419427 2808404 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1505/cgroup
	I0908 12:48:27.429274 2808404 api_server.go:182] apiserver freezer: "2:freezer:/docker/6e3ee7ccba47ff7eccb98b81e8683f1147900aa1c0bb5700b610d9c72e404308/kubepods/burstable/pod4a571e4be54a63b560fa84467190c1f9/7fddc7eb488fd8965207a489d119acff32d98317bdecc3d5434b5b01037fc5a3"
	I0908 12:48:27.429340 2808404 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6e3ee7ccba47ff7eccb98b81e8683f1147900aa1c0bb5700b610d9c72e404308/kubepods/burstable/pod4a571e4be54a63b560fa84467190c1f9/7fddc7eb488fd8965207a489d119acff32d98317bdecc3d5434b5b01037fc5a3/freezer.state
	I0908 12:48:27.438324 2808404 api_server.go:204] freezer state: "THAWED"
	I0908 12:48:27.438399 2808404 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 12:48:27.446917 2808404 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 12:48:27.446946 2808404 status.go:463] ha-879585 apiserver status = Running (err=<nil>)
	I0908 12:48:27.446957 2808404 status.go:176] ha-879585 status: &{Name:ha-879585 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:48:27.446998 2808404 status.go:174] checking status of ha-879585-m02 ...
	I0908 12:48:27.447328 2808404 cli_runner.go:164] Run: docker container inspect ha-879585-m02 --format={{.State.Status}}
	I0908 12:48:27.464852 2808404 status.go:371] ha-879585-m02 host status = "Stopped" (err=<nil>)
	I0908 12:48:27.464876 2808404 status.go:384] host is not running, skipping remaining checks
	I0908 12:48:27.464883 2808404 status.go:176] ha-879585-m02 status: &{Name:ha-879585-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:48:27.464905 2808404 status.go:174] checking status of ha-879585-m03 ...
	I0908 12:48:27.465256 2808404 cli_runner.go:164] Run: docker container inspect ha-879585-m03 --format={{.State.Status}}
	I0908 12:48:27.482538 2808404 status.go:371] ha-879585-m03 host status = "Running" (err=<nil>)
	I0908 12:48:27.482625 2808404 host.go:66] Checking if "ha-879585-m03" exists ...
	I0908 12:48:27.483640 2808404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-879585-m03
	I0908 12:48:27.501317 2808404 host.go:66] Checking if "ha-879585-m03" exists ...
	I0908 12:48:27.501617 2808404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:48:27.501657 2808404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-879585-m03
	I0908 12:48:27.520098 2808404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36463 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/ha-879585-m03/id_rsa Username:docker}
	I0908 12:48:27.607399 2808404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:48:27.619298 2808404 kubeconfig.go:125] found "ha-879585" server: "https://192.168.49.254:8443"
	I0908 12:48:27.619327 2808404 api_server.go:166] Checking apiserver status ...
	I0908 12:48:27.619379 2808404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:48:27.630822 2808404 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup
	I0908 12:48:27.640022 2808404 api_server.go:182] apiserver freezer: "2:freezer:/docker/6338c679812615a1bba6e7269f012e239d5fa069278ffa8b7119c6632d85c4bb/kubepods/burstable/pod8fcec046f6c0a78fe4dcd19273979fca/4ae5f7604d57a89dc46afb0231b325699fdccc45ef66b697a5c061cdf9dd8f92"
	I0908 12:48:27.640114 2808404 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6338c679812615a1bba6e7269f012e239d5fa069278ffa8b7119c6632d85c4bb/kubepods/burstable/pod8fcec046f6c0a78fe4dcd19273979fca/4ae5f7604d57a89dc46afb0231b325699fdccc45ef66b697a5c061cdf9dd8f92/freezer.state
	I0908 12:48:27.648251 2808404 api_server.go:204] freezer state: "THAWED"
	I0908 12:48:27.648330 2808404 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 12:48:27.656665 2808404 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 12:48:27.656692 2808404 status.go:463] ha-879585-m03 apiserver status = Running (err=<nil>)
	I0908 12:48:27.656702 2808404 status.go:176] ha-879585-m03 status: &{Name:ha-879585-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:48:27.656747 2808404 status.go:174] checking status of ha-879585-m04 ...
	I0908 12:48:27.657064 2808404 cli_runner.go:164] Run: docker container inspect ha-879585-m04 --format={{.State.Status}}
	I0908 12:48:27.678337 2808404 status.go:371] ha-879585-m04 host status = "Running" (err=<nil>)
	I0908 12:48:27.678435 2808404 host.go:66] Checking if "ha-879585-m04" exists ...
	I0908 12:48:27.678745 2808404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-879585-m04
	I0908 12:48:27.696229 2808404 host.go:66] Checking if "ha-879585-m04" exists ...
	I0908 12:48:27.696598 2808404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:48:27.696644 2808404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-879585-m04
	I0908 12:48:27.714080 2808404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36468 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/ha-879585-m04/id_rsa Username:docker}
	I0908 12:48:27.807285 2808404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:48:27.820665 2808404 status.go:176] ha-879585-m04 status: &{Name:ha-879585-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.78s)
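
The stderr trace above shows how `status` decides an apiserver is Running: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then GET /healthz on the shared HA endpoint and expect 200. A minimal sketch of that final step; unlike minikube's own client this one skips TLS verification for brevity, and the endpoint is the one from this log:

// healthz.go: probe the apiserver health endpoint used in the trace above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver: unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver healthz status:", resp.StatusCode) // 200 means ok
}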

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (13.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 node start m02 --alsologtostderr -v 5: (12.44710564s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5: (1.364956334s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.95s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.300620795s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.30s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (109.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 stop --alsologtostderr -v 5
E0908 12:48:47.956807 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:48:47.963146 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:48:47.974536 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:48:47.995992 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:48:48.037442 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:48:48.118897 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:48:48.280370 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:48:48.602012 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:48:49.243784 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:48:50.525310 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:48:53.087155 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:48:58.209405 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:49:08.450796 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 stop --alsologtostderr -v 5: (37.188508593s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 start --wait true --alsologtostderr -v 5
E0908 12:49:28.932755 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:50:09.894619 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 start --wait true --alsologtostderr -v 5: (1m12.33719637s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (109.71s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 node delete m03 --alsologtostderr -v 5: (9.584928205s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.55s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 stop --alsologtostderr -v 5: (35.855413411s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5: exit status 7 (115.537339ms)

                                                
                                                
-- stdout --
	ha-879585
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-879585-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-879585-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:51:20.811497 2823297 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:51:20.811634 2823297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:51:20.811645 2823297 out.go:374] Setting ErrFile to fd 2...
	I0908 12:51:20.811650 2823297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:51:20.811903 2823297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 12:51:20.812130 2823297 out.go:368] Setting JSON to false
	I0908 12:51:20.812169 2823297 mustload.go:65] Loading cluster: ha-879585
	I0908 12:51:20.812262 2823297 notify.go:220] Checking for updates...
	I0908 12:51:20.812596 2823297 config.go:182] Loaded profile config "ha-879585": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 12:51:20.812624 2823297 status.go:174] checking status of ha-879585 ...
	I0908 12:51:20.813438 2823297 cli_runner.go:164] Run: docker container inspect ha-879585 --format={{.State.Status}}
	I0908 12:51:20.830320 2823297 status.go:371] ha-879585 host status = "Stopped" (err=<nil>)
	I0908 12:51:20.830365 2823297 status.go:384] host is not running, skipping remaining checks
	I0908 12:51:20.830373 2823297 status.go:176] ha-879585 status: &{Name:ha-879585 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:51:20.830397 2823297 status.go:174] checking status of ha-879585-m02 ...
	I0908 12:51:20.830699 2823297 cli_runner.go:164] Run: docker container inspect ha-879585-m02 --format={{.State.Status}}
	I0908 12:51:20.859978 2823297 status.go:371] ha-879585-m02 host status = "Stopped" (err=<nil>)
	I0908 12:51:20.860009 2823297 status.go:384] host is not running, skipping remaining checks
	I0908 12:51:20.860016 2823297 status.go:176] ha-879585-m02 status: &{Name:ha-879585-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:51:20.860033 2823297 status.go:174] checking status of ha-879585-m04 ...
	I0908 12:51:20.860407 2823297 cli_runner.go:164] Run: docker container inspect ha-879585-m04 --format={{.State.Status}}
	I0908 12:51:20.877346 2823297 status.go:371] ha-879585-m04 host status = "Stopped" (err=<nil>)
	I0908 12:51:20.877369 2823297 status.go:384] host is not running, skipping remaining checks
	I0908 12:51:20.877376 2823297 status.go:176] ha-879585-m04 status: &{Name:ha-879585-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.97s)
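
As the trace above shows, once every node container is stopped the status command never reaches the apiserver probe: it asks Docker for each container's state and stops there. A minimal sketch of that first check, with the container names taken from this run:

// hoststate.go: map Docker container state to the Running/Stopped host field.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, node := range []string{"ha-879585", "ha-879585-m02", "ha-879585-m04"} {
		out, err := exec.Command("docker", "container", "inspect", node,
			"--format", "{{.State.Status}}").Output()
		state := strings.TrimSpace(string(out))
		if err != nil || state != "running" {
			fmt.Printf("%s: host Stopped (docker state %q)\n", node, state)
			continue
		}
		fmt.Printf("%s: host Running\n", node)
	}
}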

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (60.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0908 12:51:31.817080 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:51:34.195822 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.718185251s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.63s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (37.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 node add --control-plane --alsologtostderr -v 5: (36.347715217s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-879585 status --alsologtostderr -v 5: (1.363356098s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.71s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.697924334s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.70s)

                                                
                                    
TestJSONOutput/start/Command (91.17s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-940301 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E0908 12:53:47.958536 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:54:15.658712 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-940301 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m31.170664529s)
--- PASS: TestJSONOutput/start/Command (91.17s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-940301 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-940301 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.85s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-940301 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-940301 --output=json --user=testUser: (5.852335141s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-307572 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-307572 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (105.075239ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9b7963d7-3b88-484a-a744-4a3eff7e0ac7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-307572] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f13ba089-0535-474c-b978-770bf367fa83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21508"}}
	{"specversion":"1.0","id":"75cdedb3-06d0-4e16-895e-2f9133cbb39e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9e9dfbee-f7a1-4e5b-b044-f9344ab8b0f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig"}}
	{"specversion":"1.0","id":"7b68697f-68a5-4d32-afeb-3cfdab833d37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube"}}
	{"specversion":"1.0","id":"30ca06c4-b0e0-47bf-b9e1-d3a55fdfc1b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1e175bfc-531c-4594-9f32-ce85fcf44981","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bc23253a-7346-43de-9f03-2c33c5ca84f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-307572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-307572
--- PASS: TestErrorJSONOutput (0.25s)
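
The stdout above is minikube's --output=json stream: one CloudEvents-style JSON object per line, with the failure surfaced as a "io.k8s.sigs.minikube.error" event carrying the exit code. A minimal sketch of a consumer for that stream (reading from stdin; not part of the test suite):

// events.go: print only error events from a minikube --output=json stream.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Piping the start command's JSON output into it would print only the failure, e.g. the DRV_UNSUPPORTED_OS error with exit code 56 shown above.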

                                                
                                    
TestKicCustomNetwork/create_custom_network (41.22s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-643199 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-643199 --network=: (39.07336315s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-643199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-643199
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-643199: (2.125524484s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.22s)
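
The `docker network ls --format {{.Name}}` step above is the verification that the custom network was actually created. A minimal sketch of the same check in Go; that the network carries the profile name is an assumption here, based on the name used in this run:

// netcheck.go: confirm a Docker network with the expected name exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		panic(err)
	}
	found := false
	for _, name := range strings.Fields(string(out)) {
		if name == "docker-network-643199" {
			found = true
		}
	}
	fmt.Println("custom network present:", found)
}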

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.76s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-099514 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-099514 --network=bridge: (32.689962553s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-099514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-099514
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-099514: (2.042745405s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.76s)

TestKicExistingNetwork (32.34s)

=== RUN   TestKicExistingNetwork
I0908 12:56:08.682180 2751114 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0908 12:56:08.698201 2751114 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0908 12:56:08.698272 2751114 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0908 12:56:08.698289 2751114 cli_runner.go:164] Run: docker network inspect existing-network
W0908 12:56:08.713712 2751114 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0908 12:56:08.713742 2751114 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0908 12:56:08.713757 2751114 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0908 12:56:08.713860 2751114 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 12:56:08.729954 2751114 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e8e4d680522a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e6:fc:21:01:0d:7c} reservation:<nil>}
I0908 12:56:08.730245 2751114 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001cdf230}
I0908 12:56:08.730266 2751114 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0908 12:56:08.730315 2751114 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0908 12:56:08.785357 2751114 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-382197 --network=existing-network
E0908 12:56:34.195608 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-382197 --network=existing-network: (30.133894159s)
helpers_test.go:175: Cleaning up "existing-network-382197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-382197
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-382197: (2.069108237s)
I0908 12:56:41.005506 2751114 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.34s)
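
The interesting part of TestKicExistingNetwork is that the Docker network is created first and minikube is asked to reuse it. A standalone sketch of that sequence, reusing the docker network create flags logged above (subnet and gateway are the values picked in this run; the profile name is illustrative):

package main

import (
	"log"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"

	// Create the bridge network first, with the flags network_create.go logs above.
	create := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	if out, err := create.CombinedOutput(); err != nil {
		log.Fatalf("docker network create failed: %v\n%s", err, out)
	}

	// Then ask minikube to join the pre-existing network rather than allocate a new one.
	start := exec.Command(mk, "start", "-p", "existing-network-demo", "--network=existing-network") // illustrative profile
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
}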

TestKicCustomSubnet (36.17s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-539662 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-539662 --subnet=192.168.60.0/24: (34.1024043s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-539662 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-539662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-539662
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-539662: (2.043329864s)
--- PASS: TestKicCustomSubnet (36.17s)
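
A compact way to reproduce the custom-subnet check by hand: start with --subnet, then read back what the KIC network actually received via docker network inspect. A sketch assuming the subnet from this run and an illustrative profile name:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile, subnet := "custom-subnet-demo", "192.168.60.0/24" // illustrative profile; subnet from this run
	mk := "out/minikube-linux-arm64"

	if out, err := exec.Command(mk, "start", "-p", profile, "--subnet="+subnet).CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	// The KIC network is named after the profile; read back its subnet.
	out, err := exec.Command("docker", "network", "inspect", profile,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		log.Fatalf("docker network inspect failed: %v", err)
	}
	got := strings.TrimSpace(string(out))
	fmt.Printf("requested %s, got %s, match=%v\n", subnet, got, got == subnet)
}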

TestKicStaticIP (36.35s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-492237 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-492237 --static-ip=192.168.200.200: (34.015356312s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-492237 ip
helpers_test.go:175: Cleaning up "static-ip-492237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-492237
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-492237: (2.185899456s)
--- PASS: TestKicStaticIP (36.35s)
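
The static-IP check is the same start/inspect pattern, only the verification goes through `minikube ip`. A sketch with the address used in this run and an illustrative profile name:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile, want := "static-ip-demo", "192.168.200.200" // illustrative profile; address from this run
	mk := "out/minikube-linux-arm64"

	if out, err := exec.Command(mk, "start", "-p", profile, "--static-ip="+want).CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	// The node's address should come back pinned to the requested value.
	out, err := exec.Command(mk, "-p", profile, "ip").Output()
	if err != nil {
		log.Fatalf("minikube ip failed: %v", err)
	}
	got := strings.TrimSpace(string(out))
	fmt.Printf("want %s, got %s, match=%v\n", want, got, got == want)
}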

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (68.21s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-934851 --driver=docker  --container-runtime=containerd
E0908 12:57:57.263919 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-934851 --driver=docker  --container-runtime=containerd: (30.021476538s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-937419 --driver=docker  --container-runtime=containerd
E0908 12:58:47.958876 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-937419 --driver=docker  --container-runtime=containerd: (32.55910725s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-934851
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-937419
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-937419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-937419
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-937419: (1.931164736s)
helpers_test.go:175: Cleaning up "first-934851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-934851
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-934851: (2.266924629s)
--- PASS: TestMinikubeProfile (68.21s)
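
The profile assertions above rely on the machine-readable `profile list -ojson` output. A loose sketch of consuming it; the top-level grouping of the JSON is an assumption here, since the schema is not printed in this log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"

	out, err := exec.Command(mk, "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}

	// Decode loosely: only the shape "group name -> list of profile objects" is assumed.
	var groups map[string][]map[string]any
	if err := json.Unmarshal(out, &groups); err != nil {
		log.Fatalf("unexpected JSON shape: %v\n%s", err, out)
	}
	for group, profiles := range groups {
		fmt.Printf("%s: %d profile(s)\n", group, len(profiles))
	}
}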

TestMountStart/serial/StartWithMountFirst (6.72s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-537660 --memory=3072 --mount-string /tmp/TestMountStartserial2137753522/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-537660 --memory=3072 --mount-string /tmp/TestMountStartserial2137753522/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.717220981s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.72s)
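
The MountStart flags above configure a 9p host mount on a node without Kubernetes. A sketch of the same start-then-verify sequence, with an illustrative profile name and host directory:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile, hostDir := "mount-start-demo", "/tmp/mount-demo" // illustrative
	mk := "out/minikube-linux-arm64"

	// Same flags as above: a no-Kubernetes node with a 9p mount of hostDir.
	start := exec.Command(mk, "start", "-p", profile,
		"--memory=3072",
		"--mount-string", hostDir+":/minikube-host",
		"--mount-gid", "0", "--mount-uid", "0",
		"--mount-msize", "6543", "--mount-port", "46464",
		"--no-kubernetes", "--driver=docker", "--container-runtime=containerd")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	// VerifyMount* simply lists the mount point from inside the node.
	out, err := exec.Command(mk, "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh ls failed: %v\n%s", err, out)
	}
	fmt.Printf("/minikube-host:\n%s", out)
}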

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-537660 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.65s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-539588 --memory=3072 --mount-string /tmp/TestMountStartserial2137753522/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-539588 --memory=3072 --mount-string /tmp/TestMountStartserial2137753522/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.645139615s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.65s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-539588 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-537660 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-537660 --alsologtostderr -v=5: (1.623407269s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-539588 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-539588
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-539588: (1.196152207s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.96s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-539588
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-539588: (6.954847743s)
--- PASS: TestMountStart/serial/RestartStopped (7.96s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-539588 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (69.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-476357 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-476357 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m9.346076371s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.90s)

TestMultiNode/serial/DeployApp2Nodes (18.23s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-476357 -- rollout status deployment/busybox: (16.21510677s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- exec busybox-7b57f96db7-sz4wk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- exec busybox-7b57f96db7-tprqh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- exec busybox-7b57f96db7-sz4wk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- exec busybox-7b57f96db7-tprqh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- exec busybox-7b57f96db7-sz4wk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- exec busybox-7b57f96db7-tprqh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.23s)

TestMultiNode/serial/PingHostFrom2Pods (1.06s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- exec busybox-7b57f96db7-sz4wk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- exec busybox-7b57f96db7-sz4wk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- exec busybox-7b57f96db7-tprqh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476357 -- exec busybox-7b57f96db7-tprqh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.06s)
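
The host-reachability check above extracts the gateway address from nslookup output inside the pod and then pings it. A sketch of that pipeline using plain kubectl; the context and pod names are illustrative:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	kubectx, pod := "multinode-demo", "busybox-demo" // illustrative context and pod names

	// Pull the host gateway address out of nslookup inside the pod,
	// using the same line/field selection as the test's shell pipeline.
	out, err := exec.Command("kubectl", "--context", kubectx, "exec", pod, "--",
		"sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
	if err != nil {
		log.Fatalf("nslookup failed: %v", err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal ->", hostIP)

	// Then make sure the pod can actually reach that address.
	ping := exec.Command("kubectl", "--context", kubectx, "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+hostIP)
	if out, err := ping.CombinedOutput(); err != nil {
		log.Fatalf("ping failed: %v\n%s", err, out)
	}
	fmt.Println("pod can reach the host network")
}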

TestMultiNode/serial/AddNode (14.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-476357 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-476357 -v=5 --alsologtostderr: (13.876735061s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (14.64s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-476357 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (10.1s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 cp testdata/cp-test.txt multinode-476357:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 cp multinode-476357:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2879611309/001/cp-test_multinode-476357.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 cp multinode-476357:/home/docker/cp-test.txt multinode-476357-m02:/home/docker/cp-test_multinode-476357_multinode-476357-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357-m02 "sudo cat /home/docker/cp-test_multinode-476357_multinode-476357-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 cp multinode-476357:/home/docker/cp-test.txt multinode-476357-m03:/home/docker/cp-test_multinode-476357_multinode-476357-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357-m03 "sudo cat /home/docker/cp-test_multinode-476357_multinode-476357-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 cp testdata/cp-test.txt multinode-476357-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 cp multinode-476357-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2879611309/001/cp-test_multinode-476357-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 cp multinode-476357-m02:/home/docker/cp-test.txt multinode-476357:/home/docker/cp-test_multinode-476357-m02_multinode-476357.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357 "sudo cat /home/docker/cp-test_multinode-476357-m02_multinode-476357.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 cp multinode-476357-m02:/home/docker/cp-test.txt multinode-476357-m03:/home/docker/cp-test_multinode-476357-m02_multinode-476357-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357-m03 "sudo cat /home/docker/cp-test_multinode-476357-m02_multinode-476357-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 cp testdata/cp-test.txt multinode-476357-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 cp multinode-476357-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2879611309/001/cp-test_multinode-476357-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 cp multinode-476357-m03:/home/docker/cp-test.txt multinode-476357:/home/docker/cp-test_multinode-476357-m03_multinode-476357.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357 "sudo cat /home/docker/cp-test_multinode-476357-m03_multinode-476357.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 cp multinode-476357-m03:/home/docker/cp-test.txt multinode-476357-m02:/home/docker/cp-test_multinode-476357-m03_multinode-476357-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 ssh -n multinode-476357-m02 "sudo cat /home/docker/cp-test_multinode-476357-m03_multinode-476357-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.10s)
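
CopyFile exercises every host/node/node-to-node combination of `minikube cp` followed by a read-back over SSH. A trimmed sketch of one round trip; the run helper and the names are illustrative:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run is a tiny helper (not part of the suite) that fails fast on any error.
func run(args ...string) string {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	profile, node := "multinode-demo", "multinode-demo-m02" // illustrative names
	mk := "out/minikube-linux-arm64"

	// Host -> node: push a file in, then read it back over SSH.
	run(mk, "-p", profile, "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
	fmt.Print(run(mk, "-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt"))

	// Node -> node: the same cp subcommand accepts a source node prefix too.
	run(mk, "-p", profile, "cp", node+":/home/docker/cp-test.txt",
		profile+":/home/docker/cp-test_copy.txt")
}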

TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-476357 node stop m03: (1.207062638s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-476357 status: exit status 7 (555.787624ms)

                                                
                                                
-- stdout --
	multinode-476357
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-476357-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-476357-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-476357 status --alsologtostderr: exit status 7 (524.427403ms)

                                                
                                                
-- stdout --
	multinode-476357
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-476357-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-476357-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 13:01:25.317819 2877819 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:01:25.317946 2877819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:01:25.317958 2877819 out.go:374] Setting ErrFile to fd 2...
	I0908 13:01:25.317968 2877819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:01:25.318278 2877819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 13:01:25.318574 2877819 out.go:368] Setting JSON to false
	I0908 13:01:25.318643 2877819 mustload.go:65] Loading cluster: multinode-476357
	I0908 13:01:25.318733 2877819 notify.go:220] Checking for updates...
	I0908 13:01:25.319142 2877819 config.go:182] Loaded profile config "multinode-476357": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:01:25.319174 2877819 status.go:174] checking status of multinode-476357 ...
	I0908 13:01:25.319771 2877819 cli_runner.go:164] Run: docker container inspect multinode-476357 --format={{.State.Status}}
	I0908 13:01:25.340469 2877819 status.go:371] multinode-476357 host status = "Running" (err=<nil>)
	I0908 13:01:25.340496 2877819 host.go:66] Checking if "multinode-476357" exists ...
	I0908 13:01:25.340812 2877819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-476357
	I0908 13:01:25.372839 2877819 host.go:66] Checking if "multinode-476357" exists ...
	I0908 13:01:25.373152 2877819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:01:25.373254 2877819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-476357
	I0908 13:01:25.391609 2877819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36573 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/multinode-476357/id_rsa Username:docker}
	I0908 13:01:25.479719 2877819 ssh_runner.go:195] Run: systemctl --version
	I0908 13:01:25.484632 2877819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:01:25.496179 2877819 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:01:25.557362 2877819 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 13:01:25.547341433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:01:25.557920 2877819 kubeconfig.go:125] found "multinode-476357" server: "https://192.168.67.2:8443"
	I0908 13:01:25.557956 2877819 api_server.go:166] Checking apiserver status ...
	I0908 13:01:25.558000 2877819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:01:25.570174 2877819 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup
	I0908 13:01:25.581004 2877819 api_server.go:182] apiserver freezer: "2:freezer:/docker/abf33cf012efbd4abe887bd353815382f3753cb9eae4c4211dafb6e803bf3f82/kubepods/burstable/podba7ecac3bb2a95efda18facfe2becc5a/e7cffb0e04929669bd2d362e3223be9d8b307cec1537e089fa64a34ee61344ee"
	I0908 13:01:25.581086 2877819 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/abf33cf012efbd4abe887bd353815382f3753cb9eae4c4211dafb6e803bf3f82/kubepods/burstable/podba7ecac3bb2a95efda18facfe2becc5a/e7cffb0e04929669bd2d362e3223be9d8b307cec1537e089fa64a34ee61344ee/freezer.state
	I0908 13:01:25.590807 2877819 api_server.go:204] freezer state: "THAWED"
	I0908 13:01:25.590843 2877819 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0908 13:01:25.599055 2877819 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0908 13:01:25.599086 2877819 status.go:463] multinode-476357 apiserver status = Running (err=<nil>)
	I0908 13:01:25.599098 2877819 status.go:176] multinode-476357 status: &{Name:multinode-476357 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:01:25.599124 2877819 status.go:174] checking status of multinode-476357-m02 ...
	I0908 13:01:25.599450 2877819 cli_runner.go:164] Run: docker container inspect multinode-476357-m02 --format={{.State.Status}}
	I0908 13:01:25.617150 2877819 status.go:371] multinode-476357-m02 host status = "Running" (err=<nil>)
	I0908 13:01:25.617177 2877819 host.go:66] Checking if "multinode-476357-m02" exists ...
	I0908 13:01:25.617499 2877819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-476357-m02
	I0908 13:01:25.634196 2877819 host.go:66] Checking if "multinode-476357-m02" exists ...
	I0908 13:01:25.634592 2877819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:01:25.634648 2877819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-476357-m02
	I0908 13:01:25.652027 2877819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36578 SSHKeyPath:/home/jenkins/minikube-integration/21508-2749258/.minikube/machines/multinode-476357-m02/id_rsa Username:docker}
	I0908 13:01:25.743748 2877819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:01:25.756624 2877819 status.go:176] multinode-476357-m02 status: &{Name:multinode-476357-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:01:25.756657 2877819 status.go:174] checking status of multinode-476357-m03 ...
	I0908 13:01:25.756983 2877819 cli_runner.go:164] Run: docker container inspect multinode-476357-m03 --format={{.State.Status}}
	I0908 13:01:25.773979 2877819 status.go:371] multinode-476357-m03 host status = "Stopped" (err=<nil>)
	I0908 13:01:25.773999 2877819 status.go:384] host is not running, skipping remaining checks
	I0908 13:01:25.774005 2877819 status.go:176] multinode-476357-m03 status: &{Name:multinode-476357-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
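
Note that `minikube status` deliberately exits non-zero (7 here) once a node is stopped, so callers have to inspect the exit code instead of treating it as a hard failure. A sketch of that handling with illustrative names:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "multinode-demo" // illustrative
	mk := "out/minikube-linux-arm64"

	if out, err := exec.Command(mk, "-p", profile, "node", "stop", "m03").CombinedOutput(); err != nil {
		log.Fatalf("node stop failed: %v\n%s", err, out)
	}

	// status exits non-zero when any node is down, so inspect the code.
	out, err := exec.Command(mk, "-p", profile, "status").CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr):
		fmt.Printf("status exited %d (expected with a stopped node):\n%s", exitErr.ExitCode(), out)
	default:
		log.Fatalf("could not run status: %v", err)
	}
}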

TestMultiNode/serial/StartAfterStop (8.06s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-476357 node start m03 -v=5 --alsologtostderr: (7.271686447s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.06s)

TestMultiNode/serial/RestartKeepsNodes (78.71s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-476357
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-476357
E0908 13:01:34.195489 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-476357: (24.914829096s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-476357 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-476357 --wait=true -v=5 --alsologtostderr: (53.666518021s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-476357
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.71s)

TestMultiNode/serial/DeleteNode (5.47s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-476357 node delete m03: (4.802395547s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.47s)

TestMultiNode/serial/StopMultiNode (23.94s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-476357 stop: (23.737154345s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-476357 status: exit status 7 (104.083856ms)

                                                
                                                
-- stdout --
	multinode-476357
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-476357-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-476357 status --alsologtostderr: exit status 7 (97.468569ms)

                                                
                                                
-- stdout --
	multinode-476357
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-476357-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 13:03:21.915599 2886595 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:03:21.915724 2886595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:03:21.915735 2886595 out.go:374] Setting ErrFile to fd 2...
	I0908 13:03:21.915740 2886595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:03:21.915993 2886595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 13:03:21.916175 2886595 out.go:368] Setting JSON to false
	I0908 13:03:21.916218 2886595 mustload.go:65] Loading cluster: multinode-476357
	I0908 13:03:21.916308 2886595 notify.go:220] Checking for updates...
	I0908 13:03:21.916599 2886595 config.go:182] Loaded profile config "multinode-476357": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:03:21.916628 2886595 status.go:174] checking status of multinode-476357 ...
	I0908 13:03:21.917154 2886595 cli_runner.go:164] Run: docker container inspect multinode-476357 --format={{.State.Status}}
	I0908 13:03:21.937157 2886595 status.go:371] multinode-476357 host status = "Stopped" (err=<nil>)
	I0908 13:03:21.937183 2886595 status.go:384] host is not running, skipping remaining checks
	I0908 13:03:21.937190 2886595 status.go:176] multinode-476357 status: &{Name:multinode-476357 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:03:21.937234 2886595 status.go:174] checking status of multinode-476357-m02 ...
	I0908 13:03:21.937545 2886595 cli_runner.go:164] Run: docker container inspect multinode-476357-m02 --format={{.State.Status}}
	I0908 13:03:21.958696 2886595 status.go:371] multinode-476357-m02 host status = "Stopped" (err=<nil>)
	I0908 13:03:21.958723 2886595 status.go:384] host is not running, skipping remaining checks
	I0908 13:03:21.958729 2886595 status.go:176] multinode-476357-m02 status: &{Name:multinode-476357-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.94s)

TestMultiNode/serial/RestartMultiNode (53.5s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-476357 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E0908 13:03:47.956406 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-476357 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.822948588s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476357 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.50s)

TestMultiNode/serial/ValidateNameConflict (32.23s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-476357
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-476357-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-476357-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.363982ms)

                                                
                                                
-- stdout --
	* [multinode-476357-m02] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-476357-m02' is duplicated with machine name 'multinode-476357-m02' in profile 'multinode-476357'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-476357-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-476357-m03 --driver=docker  --container-runtime=containerd: (29.723753819s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-476357
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-476357: exit status 80 (395.593387ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-476357 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-476357-m03 already exists in multinode-476357-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-476357-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-476357-m03: (1.959105212s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.23s)
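
ValidateNameConflict asserts that a profile whose name collides with an existing cluster's machine name is rejected up front (MK_USAGE, exit status 14 above). A sketch of checking for that rejection; the conflicting name here is illustrative:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"

	// A profile name shaped like an existing cluster's machine name
	// (an illustrative <cluster>-m02) should be refused before any work is done.
	cmd := exec.Command(mk, "start", "-p", "multinode-demo-m02",
		"--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("rejected as expected, exit %d:\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		log.Fatalf("unexpected failure: %v", err)
	}
	log.Fatal("start unexpectedly succeeded; duplicate name was not detected")
}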

TestPreload (141.01s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-126580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0908 13:05:11.020634 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-126580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m7.447797156s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-126580 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-126580 image pull gcr.io/k8s-minikube/busybox: (2.331597808s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-126580
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-126580: (5.772066483s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-126580 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0908 13:06:34.195352 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-126580 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m2.928113478s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-126580 image list
helpers_test.go:175: Cleaning up "test-preload-126580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-126580
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-126580: (2.291569523s)
--- PASS: TestPreload (141.01s)
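
TestPreload starts without a preload tarball, pulls an extra image, restarts with defaults, and verifies the image survived. A condensed sketch of that sequence; the run helper and profile name are illustrative, the flags are the ones logged above:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run is a tiny helper (not part of the suite) that fails fast on any error.
func run(args ...string) string {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	profile := "test-preload-demo" // illustrative
	mk := "out/minikube-linux-arm64"

	// First boot without a preload tarball, pull an extra image, then stop.
	run(mk, "start", "-p", profile, "--memory=3072", "--wait=true", "--preload=false",
		"--driver=docker", "--container-runtime=containerd", "--kubernetes-version=v1.32.0")
	run(mk, "-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run(mk, "stop", "-p", profile)

	// Restart with defaults and confirm the pulled image is still present.
	run(mk, "start", "-p", profile, "--memory=3072", "--wait=true",
		"--driver=docker", "--container-runtime=containerd")
	images := run(mk, "-p", profile, "image", "list")
	fmt.Println("busybox retained:", strings.Contains(images, "busybox"))
}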

TestScheduledStopUnix (107.59s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-695293 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-695293 --memory=3072 --driver=docker  --container-runtime=containerd: (31.425748277s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-695293 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-695293 -n scheduled-stop-695293
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-695293 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 13:07:44.638582 2751114 retry.go:31] will retry after 111.102µs: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.639763 2751114 retry.go:31] will retry after 204.439µs: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.640907 2751114 retry.go:31] will retry after 266.086µs: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.642041 2751114 retry.go:31] will retry after 257.606µs: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.643278 2751114 retry.go:31] will retry after 629.596µs: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.644428 2751114 retry.go:31] will retry after 961.807µs: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.646283 2751114 retry.go:31] will retry after 1.068898ms: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.647505 2751114 retry.go:31] will retry after 1.381716ms: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.649671 2751114 retry.go:31] will retry after 1.974209ms: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.651837 2751114 retry.go:31] will retry after 5.744589ms: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.658081 2751114 retry.go:31] will retry after 4.971157ms: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.663322 2751114 retry.go:31] will retry after 7.942906ms: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.671561 2751114 retry.go:31] will retry after 17.457124ms: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.689941 2751114 retry.go:31] will retry after 29.16858ms: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.720183 2751114 retry.go:31] will retry after 15.623187ms: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
I0908 13:07:44.736360 2751114 retry.go:31] will retry after 37.209906ms: open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/scheduled-stop-695293/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-695293 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-695293 -n scheduled-stop-695293
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-695293
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-695293 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0908 13:08:47.958548 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-695293
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-695293: exit status 7 (72.992699ms)

                                                
                                                
-- stdout --
	scheduled-stop-695293
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-695293 -n scheduled-stop-695293
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-695293 -n scheduled-stop-695293: exit status 7 (71.306871ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-695293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-695293
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-695293: (4.592693806s)
--- PASS: TestScheduledStopUnix (107.59s)

TestInsufficientStorage (10.51s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-174996 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-174996 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.055453088s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c1383cbc-5ffc-42bc-b2f6-b0793dae5a21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-174996] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"77cece9e-9593-46e9-8031-bb28422d5f2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21508"}}
	{"specversion":"1.0","id":"a9ef6eae-6321-4e7d-abc2-6556d7cfce7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"26d7d402-cd6e-411c-9e4d-da0184ecc956","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig"}}
	{"specversion":"1.0","id":"01b095de-db92-4713-8685-7d43f72201ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube"}}
	{"specversion":"1.0","id":"650409fa-c189-4bb4-b9cf-b97e7ffd0fb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bb6912ba-c3f6-48a6-a715-39c8dae91cd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"93d9e31f-a517-4a3b-8529-659449accb2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"07c850e6-cfe2-46f7-ae9c-9db8d535ad39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d8a74b95-ec94-466a-a090-cc20d922238c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"efd55669-fe9c-4683-9d79-f21c7024ffa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c08c5856-1f6b-4df6-9f23-161bb5531551","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-174996\" primary control-plane node in \"insufficient-storage-174996\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"560f1d93-05c0-4816-9f2b-0b8d1515c880","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c5a7be7e-0af2-400b-a5c5-b34ce92d312e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff9887ce-8887-4832-9e80-8081da249c10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
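The exit-26 error above is the storage check tripping: the test fakes a nearly full disk via MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19, and minikube reports "/var is at 100% of capacity". A rough sketch of how such a usage percentage can be computed on Linux follows — an assumption for illustration, not minikube's actual check; only the exit code and the --force escape hatch are taken from the output above:

package main

import (
	"fmt"
	"os"
	"syscall"
)

// percentUsed reports how full the filesystem backing path is, using statfs.
func percentUsed(path string) (float64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return 0, err
	}
	total := float64(st.Blocks) * float64(st.Bsize)
	avail := float64(st.Bavail) * float64(st.Bsize)
	return 100 * (1 - avail/total), nil
}

func main() {
	used, err := percentUsed("/var")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if used >= 99 { // illustrative cutoff; the real check can be skipped with --force
		fmt.Printf("Docker is out of disk space! (/var is at %.0f%% of capacity)\n", used)
		os.Exit(26) // matches the RSRC_DOCKER_STORAGE exit code in the run above
	}
	fmt.Printf("/var usage: %.0f%%\n", used)
}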
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-174996 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-174996 --output=json --layout=cluster: exit status 7 (291.420272ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-174996","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-174996","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 13:09:08.626191 2905723 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-174996" does not appear in /home/jenkins/minikube-integration/21508-2749258/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-174996 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-174996 --output=json --layout=cluster: exit status 7 (293.751137ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-174996","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-174996","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 13:09:08.921325 2905787 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-174996" does not appear in /home/jenkins/minikube-integration/21508-2749258/kubeconfig
	E0908 13:09:08.931136 2905787 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/insufficient-storage-174996/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-174996" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-174996
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-174996: (1.867040982s)
--- PASS: TestInsufficientStorage (10.51s)

TestRunningBinaryUpgrade (67.53s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1344921394 start -p running-upgrade-161050 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1344921394 start -p running-upgrade-161050 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (33.096807362s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-161050 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-161050 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.987553092s)
helpers_test.go:175: Cleaning up "running-upgrade-161050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-161050
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-161050: (2.102670338s)
--- PASS: TestRunningBinaryUpgrade (67.53s)

TestKubernetesUpgrade (176.8s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-064194 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-064194 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.586081812s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-064194
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-064194: (1.233201925s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-064194 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-064194 status --format={{.Host}}: exit status 7 (100.883783ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-064194 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0908 13:11:34.195945 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-064194 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m57.315120068s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-064194 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-064194 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-064194 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (132.710486ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-064194] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-064194
	    minikube start -p kubernetes-upgrade-064194 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0641942 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-064194 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-064194 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-064194 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.780876276s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-064194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-064194
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-064194: (2.473002894s)
--- PASS: TestKubernetesUpgrade (176.80s)

TestMissingContainerUpgrade (150.07s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.994448117 start -p missing-upgrade-767310 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.994448117 start -p missing-upgrade-767310 --memory=3072 --driver=docker  --container-runtime=containerd: (59.393248008s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-767310
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-767310
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-767310 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-767310 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m26.480182374s)
helpers_test.go:175: Cleaning up "missing-upgrade-767310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-767310
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-767310: (2.341709499s)
--- PASS: TestMissingContainerUpgrade (150.07s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-277204 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-277204 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (100.339393ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-277204] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (43.41s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-277204 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-277204 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.92912869s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-277204 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.41s)

TestNoKubernetes/serial/StartWithStopK8s (24.06s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-277204 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-277204 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.861041396s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-277204 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-277204 status -o json: exit status 2 (301.126306ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-277204","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-277204
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-277204: (1.901138513s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.06s)

TestNoKubernetes/serial/Start (5.75s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-277204 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-277204 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (5.748273393s)
--- PASS: TestNoKubernetes/serial/Start (5.75s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-277204 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-277204 "sudo systemctl is-active --quiet service kubelet": exit status 1 (250.828063ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

TestNoKubernetes/serial/ProfileList (0.68s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.68s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-277204
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-277204: (1.214129028s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.84s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-277204 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-277204 --driver=docker  --container-runtime=containerd: (6.841454725s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.84s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-277204 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-277204 "sudo systemctl is-active --quiet service kubelet": exit status 1 (418.924905ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

TestStoppedBinaryUpgrade/Setup (0.71s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

TestStoppedBinaryUpgrade/Upgrade (58.96s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3734058707 start -p stopped-upgrade-951108 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3734058707 start -p stopped-upgrade-951108 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.629853792s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3734058707 -p stopped-upgrade-951108 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3734058707 -p stopped-upgrade-951108 stop: (1.254516976s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-951108 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-951108 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (25.069185255s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (58.96s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.46s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-951108
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-951108: (1.455553195s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.46s)

TestPause/serial/Start (116.48s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-864887 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0908 13:13:47.956808 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-864887 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m56.483794618s)
--- PASS: TestPause/serial/Start (116.48s)

TestNetworkPlugins/group/false (3.8s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-207352 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-207352 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (199.142919ms)

                                                
                                                
-- stdout --
	* [false-207352] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 13:14:32.803389 2939142 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:14:32.803535 2939142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:14:32.803544 2939142 out.go:374] Setting ErrFile to fd 2...
	I0908 13:14:32.803549 2939142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:14:32.803821 2939142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-2749258/.minikube/bin
	I0908 13:14:32.804315 2939142 out.go:368] Setting JSON to false
	I0908 13:14:32.805503 2939142 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":68225,"bootTime":1757269048,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0908 13:14:32.805588 2939142 start.go:140] virtualization:  
	I0908 13:14:32.809165 2939142 out.go:179] * [false-207352] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:14:32.813242 2939142 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:14:32.813282 2939142 notify.go:220] Checking for updates...
	I0908 13:14:32.822274 2939142 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:14:32.825257 2939142 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-2749258/kubeconfig
	I0908 13:14:32.828072 2939142 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-2749258/.minikube
	I0908 13:14:32.831003 2939142 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:14:32.834513 2939142 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:14:32.838283 2939142 config.go:182] Loaded profile config "pause-864887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:14:32.838404 2939142 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:14:32.868091 2939142 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:14:32.868214 2939142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:14:32.932012 2939142 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:14:32.922148713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:14:32.932120 2939142 docker.go:318] overlay module found
	I0908 13:14:32.935384 2939142 out.go:179] * Using the docker driver based on user configuration
	I0908 13:14:32.938295 2939142 start.go:304] selected driver: docker
	I0908 13:14:32.938318 2939142 start.go:918] validating driver "docker" against <nil>
	I0908 13:14:32.938332 2939142 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:14:32.941971 2939142 out.go:203] 
	W0908 13:14:32.944836 2939142 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0908 13:14:32.947819 2939142 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-207352 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-207352

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-207352

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-207352

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-207352

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-207352

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-207352

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-207352

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-207352

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-207352

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-207352

>>> host: /etc/nsswitch.conf:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: /etc/hosts:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: /etc/resolv.conf:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-207352

>>> host: crictl pods:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: crictl containers:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> k8s: describe netcat deployment:
error: context "false-207352" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-207352" does not exist

>>> k8s: netcat logs:
error: context "false-207352" does not exist

>>> k8s: describe coredns deployment:
error: context "false-207352" does not exist

>>> k8s: describe coredns pods:
error: context "false-207352" does not exist

>>> k8s: coredns logs:
error: context "false-207352" does not exist

>>> k8s: describe api server pod(s):
error: context "false-207352" does not exist

>>> k8s: api server logs:
error: context "false-207352" does not exist

>>> host: /etc/cni:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: ip a s:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: ip r s:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: iptables-save:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: iptables table nat:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> k8s: describe kube-proxy daemon set:
error: context "false-207352" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-207352" does not exist

>>> k8s: kube-proxy logs:
error: context "false-207352" does not exist

>>> host: kubelet daemon status:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: kubelet daemon config:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> k8s: kubelet logs:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 13:14:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-864887
contexts:
- context:
    cluster: pause-864887
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 13:14:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-864887
  name: pause-864887
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-864887
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/pause-864887/client.crt
    client-key: /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/pause-864887/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-207352

>>> host: docker daemon status:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: docker daemon config:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: /etc/docker/daemon.json:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: docker system info:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: cri-docker daemon status:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: cri-docker daemon config:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: cri-dockerd version:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: containerd daemon status:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: containerd daemon config:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: /etc/containerd/config.toml:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: containerd config dump:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: crio daemon status:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: crio daemon config:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: /etc/crio:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

>>> host: crio config:
* Profile "false-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207352"

----------------------- debugLogs end: false-207352 [took: 3.440228905s] --------------------------------
helpers_test.go:175: Cleaning up "false-207352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-207352
--- PASS: TestNetworkPlugins/group/false (3.80s)
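For reference, a minimal Go sketch of the kind of guard that avoids the repeated "Profile not found" output above: ask `minikube profile list --output json` whether the profile exists before running the per-profile debug queries. This is an illustrative snippet, not part of the minikube test suite, and the `valid`/`Name` JSON field names are assumptions about that command's output rather than something shown in this report.

// profilecheck.go: hypothetical helper, not used by these tests.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models only the fields this sketch needs; the JSON key names
// ("valid", "Name") are assumptions about `minikube profile list --output json`.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		return false, err
	}
	for _, p := range pl.Valid {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := profileExists("false-207352")
	fmt.Println("profile exists:", ok, "err:", err)
}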

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.64s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-864887 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-864887 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.612376481s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.64s)

                                                
                                    
TestPause/serial/Pause (0.93s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-864887 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

                                                
                                    
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-864887 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-864887 --output=json --layout=cluster: exit status 2 (391.342188ms)

                                                
                                                
-- stdout --
	{"Name":"pause-864887","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-864887","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
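The VerifyStatus output above is the JSON document that `minikube status --output=json --layout=cluster` prints for a paused cluster (StatusCode 418 "Paused" for the apiserver, 405 "Stopped" for the kubelet). A minimal, illustrative Go decoder for the fields visible in that output follows; the struct mirrors only what appears in this report and is not the type minikube itself uses.

// clusterstatus.go: hypothetical decoder for the status JSON shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed sample of the stdout captured above.
	raw := `{"Name":"pause-864887","StatusCode":418,"StatusName":"Paused",
	 "Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},
	 "Nodes":[{"Name":"pause-864887","StatusCode":200,"StatusName":"OK",
	  "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	   "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("%s/%s: %s (%d)\n", n.Name, name, c.StatusName, c.StatusCode)
		}
	}
}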

                                                
                                    
TestPause/serial/Unpause (0.91s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-864887 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

                                                
                                    
TestPause/serial/PauseAgain (1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-864887 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-864887 --alsologtostderr -v=5: (1.002691265s)
--- PASS: TestPause/serial/PauseAgain (1.00s)

                                                
                                    
TestPause/serial/DeletePaused (3.1s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-864887 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-864887 --alsologtostderr -v=5: (3.096384749s)
--- PASS: TestPause/serial/DeletePaused (3.10s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (6.13s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (6.070858593s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-864887
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-864887: exit status 1 (20.96932ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-864887: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (6.13s)
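VerifyDeletedResources above confirms cleanup by observing that `docker volume inspect pause-864887` exits non-zero with "no such volume" after `minikube delete`. A minimal, illustrative Go version of that single check (not the helper the test suite actually uses):

// verifydeleted.go: hypothetical re-implementation of the volume check above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// volumeGone returns true when `docker volume inspect` fails because the
// volume no longer exists, which is exactly the outcome recorded above.
func volumeGone(profile string) bool {
	out, err := exec.Command("docker", "volume", "inspect", profile).CombinedOutput()
	if err == nil {
		return false // inspect succeeded, so the volume is still present
	}
	return strings.Contains(string(out), "no such volume")
}

func main() {
	fmt.Println("pause-864887 volume removed:", volumeGone("pause-864887"))
}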

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (71.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E0908 13:16:34.195943 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m11.782988327s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (71.78s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-552634 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [379a2508-d391-4880-9b22-f53b1ad4f95f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [379a2508-d391-4880-9b22-f53b1ad4f95f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004170437s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-552634 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.50s)
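The DeployApp step above is a three-part workflow: create testdata/busybox.yaml in the cluster, wait for the pod labelled integration-test=busybox to come up, then read `ulimit -n` inside it. A minimal Go sketch of the same sequence follows, using `kubectl wait` as an assumed stand-in for the suite's own pod polling rather than what start_stop_delete_test.go actually calls.

// deployapp.go: hypothetical walk-through of the DeployApp workflow above.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(ctx string, args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", ctx}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	ctx := "old-k8s-version-552634"
	if err := kubectl(ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
		fmt.Println("create failed:", err)
		return
	}
	if err := kubectl(ctx, "wait", "--for=condition=Ready", "pod",
		"-l", "integration-test=busybox", "-n", "default", "--timeout=8m"); err != nil {
		fmt.Println("busybox never became Ready:", err)
		return
	}
	// The test asserts on the container's open-file limit.
	_ = kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
}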

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-552634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-552634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.05377908s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-552634 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-552634 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-552634 --alsologtostderr -v=3: (12.184128775s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-552634 -n old-k8s-version-552634
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-552634 -n old-k8s-version-552634: exit status 7 (73.800534ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-552634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
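EnableAddonAfterStop above relies on the fact that `minikube status --format={{.Host}}` exits with code 7 (printing "Stopped") once the cluster has been stopped, so the non-zero exit has to be tolerated rather than treated as a failure. A small, illustrative Go sketch of reading that exit code; treating 7 as "stopped but acceptable" mirrors the log above, not a documented contract.

// hoststatus.go: hypothetical exit-code handling for a stopped profile.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func hostState(profile string) (string, error) {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile)
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// Exit code 7 with "Stopped" on stdout is the expected state after `minikube stop`.
		return state, nil
	}
	return state, err
}

func main() {
	state, err := hostState("old-k8s-version-552634")
	fmt.Printf("host=%q err=%v\n", state, err)
}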

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E0908 13:18:47.955970 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-552634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (48.521621329s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-552634 -n old-k8s-version-552634
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (71.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m11.082195088s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-978911 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [230e6776-cb2d-4ce4-bb83-f85d647a97bb] Pending
helpers_test.go:352: "busybox" [230e6776-cb2d-4ce4-bb83-f85d647a97bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [230e6776-cb2d-4ce4-bb83-f85d647a97bb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004114229s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-978911 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-978911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-978911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.002027197s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-978911 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-978911 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-978911 --alsologtostderr -v=3: (12.033188172s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-978911 -n no-preload-978911
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-978911 -n no-preload-978911: exit status 7 (84.472168ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-978911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (53.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-978911 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (53.053763726s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-978911 -n no-preload-978911
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-552634 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
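VerifyKubernetesImages above lists the images loaded in the profile and reports any that fall outside the expected Kubernetes image set. The snippet below is only a rough, illustrative approximation: it lists image references for the profile and flags anything outside registry.k8s.io, a prefix heuristic assumed here for demonstration rather than the comparison the test actually performs.

// imagecheck.go: hypothetical approximation of the image verification above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "old-k8s-version-552634", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	for _, ref := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ref != "" && !strings.HasPrefix(ref, "registry.k8s.io/") {
			fmt.Println("Found non-minikube image:", ref)
		}
	}
}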

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-552634 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-552634 -n old-k8s-version-552634
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-552634 -n old-k8s-version-552634: exit status 2 (328.620774ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-552634 -n old-k8s-version-552634
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-552634 -n old-k8s-version-552634: exit status 2 (328.447478ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-552634 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-552634 -n old-k8s-version-552634
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-552634 -n old-k8s-version-552634
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.38s)
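The Pause test above cycles the profile through pause, status verification, and unpause: while paused, `status --format={{.APIServer}}` exits with code 2 and prints "Paused" (the kubelet reports "Stopped"), and after unpause the same status calls succeed. A minimal Go sketch of that cycle, with exit code 2 taken from the log and everything else illustrative.

// pausecycle.go: hypothetical pause/verify/unpause walk-through.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes minikube and returns trimmed stdout plus the exit code (0 on success).
func run(args ...string) (string, int) {
	out, err := exec.Command("minikube", args...).Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	profile := "old-k8s-version-552634"

	run("pause", "-p", profile)
	state, code := run("status", "--format={{.APIServer}}", "-p", profile)
	fmt.Printf("after pause:   apiserver=%s exit=%d\n", state, code) // expect "Paused", exit 2

	run("unpause", "-p", profile)
	state, code = run("status", "--format={{.APIServer}}", "-p", profile)
	fmt.Printf("after unpause: apiserver=%s exit=%d\n", state, code) // expect exit 0 once unpaused
}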

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (56.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-026461 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0908 13:37:37.128390 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:37.134853 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:37.146237 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:37.167606 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:37.209069 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:37.291345 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:37.452827 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:37.774832 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:38.416479 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:39.698127 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:42.259942 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:47.382206 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:57.623614 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-026461 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (56.156036094s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (56.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-026461 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ee3cb845-c2ef-4e1c-83a1-519e37d9deb4] Pending
helpers_test.go:352: "busybox" [ee3cb845-c2ef-4e1c-83a1-519e37d9deb4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ee3cb845-c2ef-4e1c-83a1-519e37d9deb4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.008478644s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-026461 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-026461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-026461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.019847336s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-026461 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-026461 --alsologtostderr -v=3
E0908 13:38:18.105496 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-026461 --alsologtostderr -v=3: (12.040965505s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-026461 -n embed-certs-026461
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-026461 -n embed-certs-026461: exit status 7 (75.760721ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-026461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (53.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-026461 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0908 13:38:31.024674 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:47.956915 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:59.067932 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-026461 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (52.851530599s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-026461 -n embed-certs-026461
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-978911 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-978911 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-978911 -n no-preload-978911
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-978911 -n no-preload-978911: exit status 2 (316.486474ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-978911 -n no-preload-978911
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-978911 -n no-preload-978911: exit status 2 (337.286283ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-978911 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-978911 -n no-preload-978911
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-978911 -n no-preload-978911
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-153420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0908 13:40:13.323517 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:13.329850 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:13.341229 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:13.362648 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:13.403958 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:13.485935 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:13.647166 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:13.968896 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:14.610427 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:15.891782 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:18.453040 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:20.989347 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:23.574958 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:33.816625 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:54.298202 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-153420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m33.43924916s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-153420 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1b0be701-69d8-4f52-b26c-e8c9cc1eadec] Pending
helpers_test.go:352: "busybox" [1b0be701-69d8-4f52-b26c-e8c9cc1eadec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1b0be701-69d8-4f52-b26c-e8c9cc1eadec] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004134909s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-153420 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-153420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-153420 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-153420 --alsologtostderr -v=3
E0908 13:41:34.196062 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:41:35.259917 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-153420 --alsologtostderr -v=3: (11.984043822s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420: exit status 7 (75.84622ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-153420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-153420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-153420 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (50.613429111s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-026461 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-026461 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-026461 -n embed-certs-026461
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-026461 -n embed-certs-026461: exit status 2 (319.494586ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-026461 -n embed-certs-026461
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-026461 -n embed-certs-026461: exit status 2 (333.392237ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-026461 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-026461 -n embed-certs-026461
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-026461 -n embed-certs-026461
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (35.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-194228 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0908 13:57:37.127807 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-194228 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (35.842214916s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-194228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-194228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.070589329s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-194228 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-194228 --alsologtostderr -v=3: (1.228570579s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-194228 -n newest-cni-194228
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-194228 -n newest-cni-194228: exit status 7 (70.741664ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-194228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-194228 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-194228 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (15.5733158s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-194228 -n newest-cni-194228
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-194228 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-194228 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-194228 -n newest-cni-194228
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-194228 -n newest-cni-194228: exit status 2 (331.162796ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-194228 -n newest-cni-194228
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-194228 -n newest-cni-194228: exit status 2 (320.466743ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-194228 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-194228 -n newest-cni-194228
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-194228 -n newest-cni-194228
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.06s)
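(For reference, the pause check above can be reproduced by hand with the same commands the test drives; a minimal sketch, using the newest-cni-194228 profile from this log. A status exit code of 2 while components are paused is expected and is what the harness records as "may be ok".)

    # pause the control plane and kubelet for the profile
    out/minikube-linux-arm64 pause -p newest-cni-194228
    # while paused, status reports Paused / Stopped and exits with code 2
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-194228 -n newest-cni-194228
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-194228 -n newest-cni-194228
    # unpause; both components should report Running again on the next status call
    out/minikube-linux-arm64 unpause -p newest-cni-194228
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-194228 -n newest-cni-194228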

                                                
                                    
TestNetworkPlugins/group/auto/Start (98.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0908 13:58:47.956854 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m38.916067424s)
--- PASS: TestNetworkPlugins/group/auto/Start (98.92s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-207352 "pgrep -a kubelet"
I0908 14:00:08.848587 2751114 config.go:182] Loaded profile config "auto-207352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-207352 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6gsvr" [28fcdc67-3c9e-45e6-8766-5c376818ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6gsvr" [28fcdc67-3c9e-45e6-8766-5c376818ae2a] Running
E0908 14:00:13.323744 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003337673s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-207352 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
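(The DNS, Localhost and HairPin checks above all go through the same netcat deployment; a minimal sketch of the three probes, assuming the auto-207352 context and the deployment from testdata/netcat-deployment.yaml are still present.)

    # in-cluster DNS resolution from inside the deployment's pod
    kubectl --context auto-207352 exec deployment/netcat -- nslookup kubernetes.default
    # the pod can reach its own listening port over localhost
    kubectl --context auto-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the pod can reach itself through its own service name
    kubectl --context auto-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"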

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-153420 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-153420 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-153420 --alsologtostderr -v=1: (1.07237556s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420: exit status 2 (366.975353ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420: exit status 2 (523.000015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-153420 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-153420 --alsologtostderr -v=1: (1.063980545s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-153420 -n default-k8s-diff-port-153420
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.16s)
E0908 14:10:56.676871 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/enable-default-cni-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:11:16.680855 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:11:17.158581 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/enable-default-cni-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:11:33.909680 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/custom-flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:11:34.195505 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:11:51.028807 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:11:58.119948 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/enable-default-cni-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:21.452068 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:37.128154 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:49.154183 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:16.386999 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:20.041298 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/enable-default-cni-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:28.730464 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:28.736853 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:28.748219 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:28.769685 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:28.811123 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:28.892525 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:29.054031 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:29.375896 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:30.018125 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:31.299949 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:33.862242 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:38.983677 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:47.955975 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:49.225425 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:50.044551 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/custom-flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:14:09.707574 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:14:17.751048 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/custom-flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:14:50.668934 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:09.130110 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:13.323173 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:26.060827 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:26.067228 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:26.078700 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:26.100161 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:26.141523 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:26.223144 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:26.384488 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:26.706443 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:27.348242 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:28.630333 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:31.192387 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:36.182635 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/enable-default-cni-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:36.314272 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:46.556504 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:16:03.883270 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/enable-default-cni-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:16:07.038391 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/bridge-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:16:12.590683 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (98.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m38.879566555s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (98.88s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-4mgw7" [e1a45656-4c4d-4c6d-a9e2-725f25f9b636] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003537218s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-207352 "pgrep -a kubelet"
I0908 14:02:27.733258 2751114 config.go:182] Loaded profile config "kindnet-207352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-207352 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cbwzk" [85cf6b39-d4c5-4674-b7c1-f655af383a30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cbwzk" [85cf6b39-d4c5-4674-b7c1-f655af383a30] Running
E0908 14:02:37.128451 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003690318s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-207352 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (50.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0908 14:03:47.956529 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/functional-936282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (50.369456818s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-207352 "pgrep -a kubelet"
I0908 14:03:49.760539 2751114 config.go:182] Loaded profile config "custom-flannel-207352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-207352 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j47fj" [176e666d-6b83-4e48-b076-c40c4896183e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-j47fj" [176e666d-6b83-4e48-b076-c40c4896183e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003309295s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-207352 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (75.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0908 14:04:37.271300 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:09.130165 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:09.136603 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:09.148140 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:09.169602 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:09.211109 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:09.292527 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:09.454070 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:09.775904 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:10.417633 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:11.699992 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:13.323004 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:14.262036 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:19.384250 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:05:29.626300 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m15.040994059s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.04s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-207352 "pgrep -a kubelet"
I0908 14:05:35.926087 2751114 config.go:182] Loaded profile config "enable-default-cni-207352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-207352 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gr8wv" [5a01b465-fc22-4617-970d-92da73266b23] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gr8wv" [5a01b465-fc22-4617-970d-92da73266b23] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004480973s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-207352 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (142.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0908 14:06:16.680668 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:06:31.070544 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:06:34.195796 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:06:44.382487 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/default-k8s-diff-port-153420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:21.452286 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:21.458839 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:21.470371 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:21.491741 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:21.533134 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:21.614655 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:21.776223 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:22.097992 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:22.740005 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:24.021656 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:26.583014 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:31.705110 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:37.127849 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/old-k8s-version-552634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:41.947096 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:07:52.991864 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:08:02.429097 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (2m22.530915151s)
--- PASS: TestNetworkPlugins/group/flannel/Start (142.53s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-772fs" [00061805-a701-4522-b7d7-d3f31714e3ee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00340178s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-207352 "pgrep -a kubelet"
I0908 14:08:35.044531 2751114 config.go:182] Loaded profile config "flannel-207352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-207352 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mf49n" [286b2065-094c-4374-873f-b3a943dcde9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mf49n" [286b2065-094c-4374-873f-b3a943dcde9a] Running
E0908 14:08:43.390531 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.008395102s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-207352 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (78.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0908 14:09:10.543892 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/custom-flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:09:31.026102 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/custom-flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:05.311943 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/kindnet-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:09.129905 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/auto-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:11.987373 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/custom-flannel-207352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:13.323497 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/no-preload-978911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-207352 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m18.087869585s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.09s)
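(The network-plugin groups in this run differ only in the CNI flag passed to minikube start; a condensed sketch of the variants exercised above, with profile names taken from the log and the --alsologtostderr/--wait flags the harness adds omitted for brevity.)

    # auto: no CNI flag, minikube picks the default for the containerd runtime
    out/minikube-linux-arm64 start -p auto-207352 --memory=3072 --driver=docker --container-runtime=containerd
    # kindnet / flannel / bridge: built-in CNI selections
    out/minikube-linux-arm64 start -p kindnet-207352 --cni=kindnet --memory=3072 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p flannel-207352 --cni=flannel --memory=3072 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p bridge-207352 --cni=bridge --memory=3072 --driver=docker --container-runtime=containerd
    # custom manifest: a local kube-flannel.yaml applied as the CNI
    out/minikube-linux-arm64 start -p custom-flannel-207352 --cni=testdata/kube-flannel.yaml --memory=3072 --driver=docker --container-runtime=containerd
    # kubenet-style default CNI
    out/minikube-linux-arm64 start -p enable-default-cni-207352 --enable-default-cni=true --memory=3072 --driver=docker --container-runtime=containerd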

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-207352 "pgrep -a kubelet"
I0908 14:10:25.785282 2751114 config.go:182] Loaded profile config "bridge-207352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-207352 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-td8c9" [07840ae3-e734-4b6d-b150-7952ba72b384] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-td8c9" [07840ae3-e734-4b6d-b150-7952ba72b384] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003764806s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-207352 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-207352 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (30/326)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0.58s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-365666 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-365666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-365666
--- SKIP: TestDownloadOnlyKic (0.58s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-406618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-406618
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.64s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-207352 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-207352

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-207352

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-207352

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-207352

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-207352

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-207352

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-207352

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-207352

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-207352

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-207352

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-207352

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-207352" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-207352" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 13:14:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-864887
contexts:
- context:
    cluster: pause-864887
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 13:14:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-864887
  name: pause-864887
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-864887
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/pause-864887/client.crt
    client-key: /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/pause-864887/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-207352

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207352"

                                                
                                                
----------------------- debugLogs end: kubenet-207352 [took: 3.478794391s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-207352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-207352
--- SKIP: TestNetworkPlugins/group/kubenet (3.64s)

TestNetworkPlugins/group/cilium (4.04s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0908 13:14:37.265584 2751114 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/addons-488661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:636: 
----------------------- debugLogs start: cilium-207352 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-207352" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-2749258/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 13:14:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-864887
contexts:
- context:
    cluster: pause-864887
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 13:14:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-864887
  name: pause-864887
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-864887
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/pause-864887/client.crt
    client-key: /home/jenkins/minikube-integration/21508-2749258/.minikube/profiles/pause-864887/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-207352

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-207352" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207352"

                                                
                                                
----------------------- debugLogs end: cilium-207352 [took: 3.876608845s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-207352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-207352
--- SKIP: TestNetworkPlugins/group/cilium (4.04s)