Test Report: Docker_Linux_containerd 21968

c47dc458d63a230593369798adacaa3ab200078c:2025-11-23:42467

Failed tests (4/333)

Order  Failed test                                                    Duration (s)
350    TestStartStop/group/old-k8s-version/serial/DeployApp          14.8
353    TestStartStop/group/embed-certs/serial/DeployApp              16.08
354    TestStartStop/group/no-preload/serial/DeployApp               13.05
367    TestStartStop/group/default-k8s-diff-port/serial/DeployApp    14.34
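To reproduce a failure locally from a minikube source checkout, an individual test can be selected with go test's -run pattern. This is a sketch only: the driver and container-runtime flags this CI job passed to the integration harness are not shown above and would need to match the Docker_Linux_containerd configuration.

	go test ./test/integration -v -timeout 30m \
	  -run 'TestStartStop/group/old-k8s-version/serial/DeployApp'
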
TestStartStop/group/old-k8s-version/serial/DeployApp (14.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-709593 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bea346d9-0dca-482c-b9f9-7b71741b18d7] Pending
helpers_test.go:352: "busybox" [bea346d9-0dca-482c-b9f9-7b71741b18d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bea346d9-0dca-482c-b9f9-7b71741b18d7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005092881s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-709593 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
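The failure itself is the open-file limit seen inside the pod: the test expects "ulimit -n" to report 1048576, but the busybox container reports the default 1024. A quick way to narrow down where the limit is lost, assuming the profile name from this run, is to compare the value inside the pod with the value inside the minikube node container:

	kubectl --context old-k8s-version-709593 exec busybox -- /bin/sh -c "ulimit -n"
	docker exec old-k8s-version-709593 sh -c "ulimit -n"

If the node container's shell also reports 1024, the higher limit was not applied at the Docker level; note that HostConfig.Ulimits is empty ("Ulimits": []) in the docker inspect output below.
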
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-709593
helpers_test.go:243: (dbg) docker inspect old-k8s-version-709593:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384",
	        "Created": "2025-11-23T09:56:47.666891207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 294280,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:56:47.720935343Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384/hostname",
	        "HostsPath": "/var/lib/docker/containers/29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384/hosts",
	        "LogPath": "/var/lib/docker/containers/29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384/29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384-json.log",
	        "Name": "/old-k8s-version-709593",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-709593:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-709593",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384",
	                "LowerDir": "/var/lib/docker/overlay2/ea62ac2e144b45f2284ed569ef537390326f82b0cb3d40e4d46e0ff286b7eb90-init/diff:/var/lib/docker/overlay2/c80a0dfdb81b7753b0a82e2bc6458805cbbad0a9ce5819c63e1d9b7b71ba226c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ea62ac2e144b45f2284ed569ef537390326f82b0cb3d40e4d46e0ff286b7eb90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ea62ac2e144b45f2284ed569ef537390326f82b0cb3d40e4d46e0ff286b7eb90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ea62ac2e144b45f2284ed569ef537390326f82b0cb3d40e4d46e0ff286b7eb90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-709593",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-709593/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-709593",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-709593",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-709593",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b544aba317fcf40d3e61edbec3240f39587be7e914d5c21fc69a6535b296b152",
	            "SandboxKey": "/var/run/docker/netns/b544aba317fc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-709593": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4fa988beb7cda350f0c11b822dcc90801b7cc48baa23c5c851d275a8d3ed42f8",
	                    "EndpointID": "da8f042fa74ebc4420b7404b4cac4144f9e37e8a91e96eb145a8c67dcfe76dd3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "76:bc:b6:48:41:0f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-709593",
	                        "29cb528aee84"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
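The inspect dump above is long; individual fields can be pulled out with docker inspect's --format Go template, or with jq if it is available on the host. Both commands below are illustrative, using the container name from this run:

	docker inspect --format '{{json .NetworkSettings.Ports}}' old-k8s-version-709593
	docker inspect old-k8s-version-709593 | jq '.[0].HostConfig.Ulimits'
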
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-709593 -n old-k8s-version-709593
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-709593 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-709593 logs -n 25: (1.332498622s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-676928 sudo systemctl cat kubelet --no-pager                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /var/lib/kubelet/config.yaml                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status docker --all --full --no-pager                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat docker --no-pager                                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/docker/daemon.json                                                                                                                              │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo docker system info                                                                                                                                       │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl status cri-docker --all --full --no-pager                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat cri-docker --no-pager                                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                 │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                           │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cri-dockerd --version                                                                                                                                    │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status containerd --all --full --no-pager                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl cat containerd --no-pager                                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /lib/systemd/system/containerd.service                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/containerd/config.toml                                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo containerd config dump                                                                                                                                   │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status crio --all --full --no-pager                                                                                                            │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat crio --no-pager                                                                                                                            │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                  │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo crio config                                                                                                                                              │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p bridge-676928                                                                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p disable-driver-mounts-178820                                                                                                                                                │ disable-driver-mounts-178820 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ start   │ -p default-k8s-diff-port-696492 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ default-k8s-diff-port-696492 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:57:41
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:57:41.194019  311138 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:57:41.194298  311138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:57:41.194308  311138 out.go:374] Setting ErrFile to fd 2...
	I1123 09:57:41.194312  311138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:57:41.194606  311138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:57:41.195144  311138 out.go:368] Setting JSON to false
	I1123 09:57:41.196591  311138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2400,"bootTime":1763889461,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:57:41.196668  311138 start.go:143] virtualization: kvm guest
	I1123 09:57:41.199167  311138 out.go:179] * [default-k8s-diff-port-696492] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:57:41.201043  311138 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:57:41.201094  311138 notify.go:221] Checking for updates...
	I1123 09:57:41.204382  311138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:57:41.206017  311138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:57:41.207959  311138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:57:41.209794  311138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:57:41.211809  311138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:57:41.214009  311138 config.go:182] Loaded profile config "embed-certs-412583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:41.214105  311138 config.go:182] Loaded profile config "no-preload-309734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:41.214180  311138 config.go:182] Loaded profile config "old-k8s-version-709593": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 09:57:41.214271  311138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:57:41.241306  311138 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:57:41.241474  311138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:57:41.312013  311138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:57:41.299959199 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:57:41.312116  311138 docker.go:319] overlay module found
	I1123 09:57:41.314243  311138 out.go:179] * Using the docker driver based on user configuration
	I1123 09:57:41.316002  311138 start.go:309] selected driver: docker
	I1123 09:57:41.316024  311138 start.go:927] validating driver "docker" against <nil>
	I1123 09:57:41.316037  311138 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:57:41.316751  311138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:57:41.385595  311138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:57:41.373759534 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:57:41.385794  311138 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:57:41.386023  311138 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:41.388087  311138 out.go:179] * Using Docker driver with root privileges
	I1123 09:57:41.389651  311138 cni.go:84] Creating CNI manager for ""
	I1123 09:57:41.389725  311138 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:57:41.389738  311138 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:57:41.389816  311138 start.go:353] cluster config:
	{Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:57:41.391556  311138 out.go:179] * Starting "default-k8s-diff-port-696492" primary control-plane node in "default-k8s-diff-port-696492" cluster
	I1123 09:57:41.392982  311138 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 09:57:41.394476  311138 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:57:41.395978  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:41.396028  311138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 09:57:41.396036  311138 cache.go:65] Caching tarball of preloaded images
	I1123 09:57:41.396075  311138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:57:41.396157  311138 preload.go:238] Found /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 09:57:41.396175  311138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 09:57:41.396320  311138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json ...
	I1123 09:57:41.396374  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json: {Name:mk3b81d8fd8561a54828649e3e510565221995b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:41.422089  311138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:57:41.422112  311138 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:57:41.422133  311138 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:57:41.422177  311138 start.go:360] acquireMachinesLock for default-k8s-diff-port-696492: {Name:mkc8ee83ed2b7a995e355ddec223dfeea233bbf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:57:41.422316  311138 start.go:364] duration metric: took 112.296µs to acquireMachinesLock for "default-k8s-diff-port-696492"
	I1123 09:57:41.422500  311138 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disabl
eCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:57:41.422632  311138 start.go:125] createHost starting for "" (driver="docker")
	W1123 09:57:37.251564  300017 node_ready.go:57] node "embed-certs-412583" has "Ready":"False" status (will retry)
	W1123 09:57:39.751746  300017 node_ready.go:57] node "embed-certs-412583" has "Ready":"False" status (will retry)
	I1123 09:57:42.255256  300017 node_ready.go:49] node "embed-certs-412583" is "Ready"
	I1123 09:57:42.255291  300017 node_ready.go:38] duration metric: took 11.507766088s for node "embed-certs-412583" to be "Ready" ...
	I1123 09:57:42.255310  300017 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:57:42.255471  300017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:57:42.277737  300017 api_server.go:72] duration metric: took 12.028046262s to wait for apiserver process to appear ...
	I1123 09:57:42.277770  300017 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:57:42.277792  300017 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:57:42.285468  300017 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 09:57:42.287274  300017 api_server.go:141] control plane version: v1.34.1
	I1123 09:57:42.287395  300017 api_server.go:131] duration metric: took 9.61454ms to wait for apiserver health ...
	I1123 09:57:42.287422  300017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:57:42.294433  300017 system_pods.go:59] 8 kube-system pods found
	I1123 09:57:42.294478  300017 system_pods.go:61] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.294486  300017 system_pods.go:61] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.294493  300017 system_pods.go:61] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.294499  300017 system_pods.go:61] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.294505  300017 system_pods.go:61] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.294510  300017 system_pods.go:61] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.294515  300017 system_pods.go:61] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.294526  300017 system_pods.go:61] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.294539  300017 system_pods.go:74] duration metric: took 7.098728ms to wait for pod list to return data ...
	I1123 09:57:42.294549  300017 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:57:42.298321  300017 default_sa.go:45] found service account: "default"
	I1123 09:57:42.298368  300017 default_sa.go:55] duration metric: took 3.811774ms for default service account to be created ...
	I1123 09:57:42.298382  300017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:57:42.302807  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.302871  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.302887  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.302896  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.302903  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.302927  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.302937  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.302943  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.302954  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.303049  300017 retry.go:31] will retry after 268.599682ms: missing components: kube-dns
	I1123 09:57:42.577490  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.577531  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.577541  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.577550  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.577557  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.577563  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.577568  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.577573  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.577581  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.577600  300017 retry.go:31] will retry after 240.156475ms: missing components: kube-dns
	I1123 09:57:42.822131  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.822171  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.822177  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.822182  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.822186  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.822190  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.822194  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.822197  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.822202  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.822216  300017 retry.go:31] will retry after 383.926777ms: missing components: kube-dns
	I1123 09:57:43.211532  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:43.211575  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Running
	I1123 09:57:43.211585  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:43.211592  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:43.211600  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:43.211608  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:43.211624  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:43.211635  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:43.211640  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Running
	I1123 09:57:43.211650  300017 system_pods.go:126] duration metric: took 913.260942ms to wait for k8s-apps to be running ...
	I1123 09:57:43.211661  300017 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:57:43.211722  300017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:57:43.226055  300017 system_svc.go:56] duration metric: took 14.383207ms WaitForService to wait for kubelet
	I1123 09:57:43.226087  300017 kubeadm.go:587] duration metric: took 12.976401428s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:43.226108  300017 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:57:43.229492  300017 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:57:43.229524  300017 node_conditions.go:123] node cpu capacity is 8
	I1123 09:57:43.229547  300017 node_conditions.go:105] duration metric: took 3.432669ms to run NodePressure ...
	I1123 09:57:43.229560  300017 start.go:242] waiting for startup goroutines ...
	I1123 09:57:43.229570  300017 start.go:247] waiting for cluster config update ...
	I1123 09:57:43.229583  300017 start.go:256] writing updated cluster config ...
	I1123 09:57:43.229975  300017 ssh_runner.go:195] Run: rm -f paused
	I1123 09:57:43.235596  300017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:43.243251  300017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.248984  300017 pod_ready.go:94] pod "coredns-66bc5c9577-8dgc7" is "Ready"
	I1123 09:57:43.249015  300017 pod_ready.go:86] duration metric: took 5.729453ms for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.251635  300017 pod_ready.go:83] waiting for pod "etcd-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.256613  300017 pod_ready.go:94] pod "etcd-embed-certs-412583" is "Ready"
	I1123 09:57:43.256645  300017 pod_ready.go:86] duration metric: took 4.984583ms for pod "etcd-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.259023  300017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.264242  300017 pod_ready.go:94] pod "kube-apiserver-embed-certs-412583" is "Ready"
	I1123 09:57:43.264273  300017 pod_ready.go:86] duration metric: took 5.223434ms for pod "kube-apiserver-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.311182  300017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.642602  300017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-412583" is "Ready"
	I1123 09:57:43.642637  300017 pod_ready.go:86] duration metric: took 331.426321ms for pod "kube-controller-manager-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.843849  300017 pod_ready.go:83] waiting for pod "kube-proxy-wm7k2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.244623  300017 pod_ready.go:94] pod "kube-proxy-wm7k2" is "Ready"
	I1123 09:57:44.244667  300017 pod_ready.go:86] duration metric: took 400.77745ms for pod "kube-proxy-wm7k2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.444056  300017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.843963  300017 pod_ready.go:94] pod "kube-scheduler-embed-certs-412583" is "Ready"
	I1123 09:57:44.843992  300017 pod_ready.go:86] duration metric: took 399.904179ms for pod "kube-scheduler-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.844006  300017 pod_ready.go:40] duration metric: took 1.608365258s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:44.891853  300017 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:57:44.964864  300017 out.go:179] * Done! kubectl is now configured to use "embed-certs-412583" cluster and "default" namespace by default
	W1123 09:57:41.488122  296642 node_ready.go:57] node "no-preload-309734" has "Ready":"False" status (will retry)
	W1123 09:57:43.488201  296642 node_ready.go:57] node "no-preload-309734" has "Ready":"False" status (will retry)
	I1123 09:57:43.988019  296642 node_ready.go:49] node "no-preload-309734" is "Ready"
	I1123 09:57:43.988052  296642 node_ready.go:38] duration metric: took 14.003534589s for node "no-preload-309734" to be "Ready" ...
	I1123 09:57:43.988069  296642 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:57:43.988149  296642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:57:44.008503  296642 api_server.go:72] duration metric: took 14.434117996s to wait for apiserver process to appear ...
	I1123 09:57:44.008530  296642 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:57:44.008551  296642 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:57:44.017109  296642 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 09:57:44.018176  296642 api_server.go:141] control plane version: v1.34.1
	I1123 09:57:44.018200  296642 api_server.go:131] duration metric: took 9.663468ms to wait for apiserver health ...
	I1123 09:57:44.018208  296642 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:57:44.022287  296642 system_pods.go:59] 8 kube-system pods found
	I1123 09:57:44.022324  296642 system_pods.go:61] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.022351  296642 system_pods.go:61] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.022364  296642 system_pods.go:61] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.022369  296642 system_pods.go:61] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.022375  296642 system_pods.go:61] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.022381  296642 system_pods.go:61] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.022387  296642 system_pods.go:61] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.022397  296642 system_pods.go:61] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.022406  296642 system_pods.go:74] duration metric: took 4.191598ms to wait for pod list to return data ...
	I1123 09:57:44.022421  296642 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:57:44.025262  296642 default_sa.go:45] found service account: "default"
	I1123 09:57:44.025287  296642 default_sa.go:55] duration metric: took 2.858313ms for default service account to be created ...
	I1123 09:57:44.025300  296642 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:57:44.028240  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.028269  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.028275  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.028281  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.028285  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.028289  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.028293  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.028296  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.028300  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.028346  296642 retry.go:31] will retry after 283.472429ms: missing components: kube-dns
	I1123 09:57:44.317300  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.317353  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.317361  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.317370  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.317376  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.317382  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.317387  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.317391  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.317397  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.317416  296642 retry.go:31] will retry after 321.7427ms: missing components: kube-dns
	I1123 09:57:44.689277  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.689322  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.689344  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.689353  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.689359  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.689366  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.689370  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.689375  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.689382  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.689411  296642 retry.go:31] will retry after 353.961831ms: missing components: kube-dns
	I1123 09:57:45.048995  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:45.049060  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:45.049069  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:45.049078  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:45.049084  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:45.049090  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:45.049099  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:45.049104  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:45.049116  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:45.049135  296642 retry.go:31] will retry after 412.630882ms: missing components: kube-dns
	I1123 09:57:45.607770  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:45.607816  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:45.607826  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:45.607836  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:45.607841  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:45.607847  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:45.607851  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:45.607856  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:45.607873  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:45.607891  296642 retry.go:31] will retry after 544.365573ms: missing components: kube-dns
	I1123 09:57:41.425584  311138 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:57:41.425893  311138 start.go:159] libmachine.API.Create for "default-k8s-diff-port-696492" (driver="docker")
	I1123 09:57:41.425945  311138 client.go:173] LocalClient.Create starting
	I1123 09:57:41.426056  311138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem
	I1123 09:57:41.426100  311138 main.go:143] libmachine: Decoding PEM data...
	I1123 09:57:41.426121  311138 main.go:143] libmachine: Parsing certificate...
	I1123 09:57:41.426185  311138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem
	I1123 09:57:41.426208  311138 main.go:143] libmachine: Decoding PEM data...
	I1123 09:57:41.426217  311138 main.go:143] libmachine: Parsing certificate...
	I1123 09:57:41.426608  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:57:41.445568  311138 cli_runner.go:211] docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:57:41.445670  311138 network_create.go:284] running [docker network inspect default-k8s-diff-port-696492] to gather additional debugging logs...
	I1123 09:57:41.445697  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492
	W1123 09:57:41.465174  311138 cli_runner.go:211] docker network inspect default-k8s-diff-port-696492 returned with exit code 1
	I1123 09:57:41.465216  311138 network_create.go:287] error running [docker network inspect default-k8s-diff-port-696492]: docker network inspect default-k8s-diff-port-696492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-696492 not found
	I1123 09:57:41.465236  311138 network_create.go:289] output of [docker network inspect default-k8s-diff-port-696492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-696492 not found
	
	** /stderr **
	I1123 09:57:41.465403  311138 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:57:41.487255  311138 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-de5cba392bb4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:8d:f5:88:bc:8b} reservation:<nil>}
	I1123 09:57:41.488105  311138 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e2eabbe85d5b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:f4:02:bd:23:31} reservation:<nil>}
	I1123 09:57:41.489037  311138 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-22e47e96d08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:9e:83:f9:9f:f6} reservation:<nil>}
	I1123 09:57:41.489614  311138 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4fa988beb7cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:18:12:be:77:f6} reservation:<nil>}
	I1123 09:57:41.492079  311138 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d80820}
	I1123 09:57:41.492121  311138 network_create.go:124] attempt to create docker network default-k8s-diff-port-696492 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 09:57:41.492171  311138 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 default-k8s-diff-port-696492
	I1123 09:57:41.554538  311138 network_create.go:108] docker network default-k8s-diff-port-696492 192.168.85.0/24 created
	I1123 09:57:41.554588  311138 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-696492" container
	I1123 09:57:41.554664  311138 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:57:41.575522  311138 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-696492 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:57:41.598058  311138 oci.go:103] Successfully created a docker volume default-k8s-diff-port-696492
	I1123 09:57:41.598141  311138 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-696492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --entrypoint /usr/bin/test -v default-k8s-diff-port-696492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:57:42.041176  311138 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-696492
	I1123 09:57:42.041254  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:42.041269  311138 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:57:42.041325  311138 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-696492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 09:57:46.265821  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:46.265851  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Running
	I1123 09:57:46.265856  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:46.265860  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:46.265863  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:46.265868  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:46.265870  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:46.265875  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:46.265879  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Running
	I1123 09:57:46.265889  296642 system_pods.go:126] duration metric: took 2.240582653s to wait for k8s-apps to be running ...
	I1123 09:57:46.265903  296642 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:57:46.265972  296642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:57:46.283075  296642 system_svc.go:56] duration metric: took 17.161056ms WaitForService to wait for kubelet
	I1123 09:57:46.283105  296642 kubeadm.go:587] duration metric: took 16.70872571s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:46.283128  296642 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:57:46.491444  296642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:57:46.491473  296642 node_conditions.go:123] node cpu capacity is 8
	I1123 09:57:46.491486  296642 node_conditions.go:105] duration metric: took 208.353263ms to run NodePressure ...
	I1123 09:57:46.491509  296642 start.go:242] waiting for startup goroutines ...
	I1123 09:57:46.491520  296642 start.go:247] waiting for cluster config update ...
	I1123 09:57:46.491533  296642 start.go:256] writing updated cluster config ...
	I1123 09:57:46.491804  296642 ssh_runner.go:195] Run: rm -f paused
	I1123 09:57:46.498152  296642 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:46.503240  296642 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.508998  296642 pod_ready.go:94] pod "coredns-66bc5c9577-sx25q" is "Ready"
	I1123 09:57:46.509028  296642 pod_ready.go:86] duration metric: took 5.757344ms for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.512072  296642 pod_ready.go:83] waiting for pod "etcd-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.517750  296642 pod_ready.go:94] pod "etcd-no-preload-309734" is "Ready"
	I1123 09:57:46.517777  296642 pod_ready.go:86] duration metric: took 5.673234ms for pod "etcd-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.520446  296642 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.525480  296642 pod_ready.go:94] pod "kube-apiserver-no-preload-309734" is "Ready"
	I1123 09:57:46.525513  296642 pod_ready.go:86] duration metric: took 5.036877ms for pod "kube-apiserver-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.528196  296642 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.902790  296642 pod_ready.go:94] pod "kube-controller-manager-no-preload-309734" is "Ready"
	I1123 09:57:46.902815  296642 pod_ready.go:86] duration metric: took 374.588413ms for pod "kube-controller-manager-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.104263  296642 pod_ready.go:83] waiting for pod "kube-proxy-jpvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.504876  296642 pod_ready.go:94] pod "kube-proxy-jpvhc" is "Ready"
	I1123 09:57:47.504999  296642 pod_ready.go:86] duration metric: took 400.696383ms for pod "kube-proxy-jpvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.706275  296642 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:48.104684  296642 pod_ready.go:94] pod "kube-scheduler-no-preload-309734" is "Ready"
	I1123 09:57:48.104720  296642 pod_ready.go:86] duration metric: took 398.41369ms for pod "kube-scheduler-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:48.104739  296642 pod_ready.go:40] duration metric: took 1.606531718s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:48.181507  296642 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:57:48.183959  296642 out.go:179] * Done! kubectl is now configured to use "no-preload-309734" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	8d7f40f8f4e07       56cc512116c8f       8 seconds ago       Running             busybox                   0                   fef27a1a4d0d4       busybox                                          default
	d15093524dcf0       ead0a4a53df89       14 seconds ago      Running             coredns                   0                   1410c58ee49e1       coredns-5dd5756b68-gf5sx                         kube-system
	6188a0a11a558       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   d10f215129879       storage-provisioner                              kube-system
	a1af83bb67492       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   0d60321491712       kindnet-tpvt2                                    kube-system
	e82a6fec044de       ea1030da44aa1       28 seconds ago      Running             kube-proxy                0                   11e7ed694601b       kube-proxy-sgv48                                 kube-system
	1b2964c416267       4be79c38a4bab       50 seconds ago      Running             kube-controller-manager   0                   2cc4143ea8b90       kube-controller-manager-old-k8s-version-709593   kube-system
	33f6ed017ec88       f6f496300a2ae       50 seconds ago      Running             kube-scheduler            0                   11295be3c0583       kube-scheduler-old-k8s-version-709593            kube-system
	9ab267968c030       bb5e0dde9054c       50 seconds ago      Running             kube-apiserver            0                   86d19ce97a6b1       kube-apiserver-old-k8s-version-709593            kube-system
	d4c298d1c8060       73deb9a3f7025       50 seconds ago      Running             etcd                      0                   2f9ec40d5f287       etcd-old-k8s-version-709593                      kube-system
	
	
	==> containerd <==
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.420590122Z" level=info msg="CreateContainer within sandbox \"d10f2151298793071f334a433fb6cfce4b8b35c05f27a6d4e58960cedbf96462\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"6188a0a11a558ccfe4a936446819a158ec0f3ff08b1c7692bf3db57ce82539bc\""
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.421304727Z" level=info msg="StartContainer for \"6188a0a11a558ccfe4a936446819a158ec0f3ff08b1c7692bf3db57ce82539bc\""
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.423036667Z" level=info msg="connecting to shim 6188a0a11a558ccfe4a936446819a158ec0f3ff08b1c7692bf3db57ce82539bc" address="unix:///run/containerd/s/1f0be7d26635bbcb41f6c32b3d2f1385a50ecbc1dec74ce6548e85610e0cefc1" protocol=ttrpc version=3
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.423927224Z" level=info msg="CreateContainer within sandbox \"1410c58ee49e106f41592b5e6ae663765165c9b234249dacefc4e2eccebfec08\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d15093524dcf0f71add09a89666b6ef551f8abcfe19462f1f52e6396cfa9b90f\""
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.424701663Z" level=info msg="StartContainer for \"d15093524dcf0f71add09a89666b6ef551f8abcfe19462f1f52e6396cfa9b90f\""
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.425764608Z" level=info msg="connecting to shim d15093524dcf0f71add09a89666b6ef551f8abcfe19462f1f52e6396cfa9b90f" address="unix:///run/containerd/s/fe12e30014183b4c11ebd3e6acfbe97fc1992c631d1626cb13faef4fe4d22ee6" protocol=ttrpc version=3
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.488919409Z" level=info msg="StartContainer for \"d15093524dcf0f71add09a89666b6ef551f8abcfe19462f1f52e6396cfa9b90f\" returns successfully"
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.489532054Z" level=info msg="StartContainer for \"6188a0a11a558ccfe4a936446819a158ec0f3ff08b1c7692bf3db57ce82539bc\" returns successfully"
	Nov 23 09:57:37 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:37.817959050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bea346d9-0dca-482c-b9f9-7b71741b18d7,Namespace:default,Attempt:0,}"
	Nov 23 09:57:37 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:37.866021477Z" level=info msg="connecting to shim fef27a1a4d0d4d0fd89a702b88e4f10a3d0f81a41d5a766dcd38d6273f063615" address="unix:///run/containerd/s/f66c8e58b533a67c21226ca176913c77f22823731a0ac223ff958c8fefe43b11" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:57:37 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:37.950965400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bea346d9-0dca-482c-b9f9-7b71741b18d7,Namespace:default,Attempt:0,} returns sandbox id \"fef27a1a4d0d4d0fd89a702b88e4f10a3d0f81a41d5a766dcd38d6273f063615\""
	Nov 23 09:57:37 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:37.953294596Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.223204984Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.224183979Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396648"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.226078502Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.228512955Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.229002948Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.275384117s"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.229045171Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.230910353Z" level=info msg="CreateContainer within sandbox \"fef27a1a4d0d4d0fd89a702b88e4f10a3d0f81a41d5a766dcd38d6273f063615\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.242585175Z" level=info msg="Container 8d7f40f8f4e0763efe28dd2b910dd945b4ad8925953ca7a945bf4566509889f4: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.253136286Z" level=info msg="CreateContainer within sandbox \"fef27a1a4d0d4d0fd89a702b88e4f10a3d0f81a41d5a766dcd38d6273f063615\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8d7f40f8f4e0763efe28dd2b910dd945b4ad8925953ca7a945bf4566509889f4\""
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.253869141Z" level=info msg="StartContainer for \"8d7f40f8f4e0763efe28dd2b910dd945b4ad8925953ca7a945bf4566509889f4\""
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.258087383Z" level=info msg="connecting to shim 8d7f40f8f4e0763efe28dd2b910dd945b4ad8925953ca7a945bf4566509889f4" address="unix:///run/containerd/s/f66c8e58b533a67c21226ca176913c77f22823731a0ac223ff958c8fefe43b11" protocol=ttrpc version=3
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.328511725Z" level=info msg="StartContainer for \"8d7f40f8f4e0763efe28dd2b910dd945b4ad8925953ca7a945bf4566509889f4\" returns successfully"
	Nov 23 09:57:47 old-k8s-version-709593 containerd[660]: E1123 09:57:47.651496     660 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [d15093524dcf0f71add09a89666b6ef551f8abcfe19462f1f52e6396cfa9b90f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34931 - 60518 "HINFO IN 7244376839273605299.5052886007572092194. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04020687s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-709593
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-709593
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=old-k8s-version-709593
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_57_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:57:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-709593
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:57:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:57:36 +0000   Sun, 23 Nov 2025 09:56:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:57:36 +0000   Sun, 23 Nov 2025 09:56:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:57:36 +0000   Sun, 23 Nov 2025 09:56:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:57:36 +0000   Sun, 23 Nov 2025 09:57:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-709593
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                9e6f0832-18db-4c8d-86e4-20812ea439e5
	  Boot ID:                    e4c4d39b-bebd-4037-9237-26b945dbe084
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-gf5sx                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-old-k8s-version-709593                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-tpvt2                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-old-k8s-version-709593             250m (3%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-old-k8s-version-709593    200m (2%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-sgv48                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-old-k8s-version-709593             100m (1%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node old-k8s-version-709593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node old-k8s-version-709593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x7 over 52s)  kubelet          Node old-k8s-version-709593 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  52s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  43s                kubelet          Node old-k8s-version-709593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet          Node old-k8s-version-709593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet          Node old-k8s-version-709593 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node old-k8s-version-709593 event: Registered Node old-k8s-version-709593 in Controller
	  Normal  NodeReady                16s                kubelet          Node old-k8s-version-709593 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.288463] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 09:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[Nov23 09:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.195562] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +5.912917] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 c0 1c 98 33 a9 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.002091] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 47 bd bf 96 57 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[  +4.460318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 85 b9 91 f8 a4 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +2.904694] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	[Nov23 09:57] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 76 48 bf 8b d1 fc 08 06
	[  +0.000931] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	
	
	==> etcd [d4c298d1c8060139c5bb973acee87dc3fbc6b6454b9e3c8ebe9c6b86a2e5a7b8] <==
	{"level":"info","ts":"2025-11-23T09:56:58.59753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T09:56:58.597864Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T09:56:58.597974Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T09:56:58.598004Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T09:56:58.599014Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-11-23T09:57:01.971736Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.487229ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837419424543 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:monitoring\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:monitoring\" value_size:573 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T09:57:01.971868Z","caller":"traceutil/trace.go:171","msg":"trace[1367842110] transaction","detail":"{read_only:false; response_revision:112; number_of_response:1; }","duration":"185.333295ms","start":"2025-11-23T09:57:01.786515Z","end":"2025-11-23T09:57:01.971849Z","steps":["trace[1367842110] 'process raft request'  (duration: 59.969834ms)","trace[1367842110] 'compare'  (duration: 124.335128ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:02.204167Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.409698ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837419424553 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/view\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/view\" value_size:673 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T09:57:02.204261Z","caller":"traceutil/trace.go:171","msg":"trace[1142240257] transaction","detail":"{read_only:false; response_revision:117; number_of_response:1; }","duration":"141.084345ms","start":"2025-11-23T09:57:02.063163Z","end":"2025-11-23T09:57:02.204247Z","steps":["trace[1142240257] 'compare'  (duration: 132.298203ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:02.49574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.58211ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837419424557 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-edit\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-edit\" value_size:1957 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T09:57:02.495841Z","caller":"traceutil/trace.go:171","msg":"trace[1763507131] transaction","detail":"{read_only:false; response_revision:119; number_of_response:1; }","duration":"249.990542ms","start":"2025-11-23T09:57:02.245837Z","end":"2025-11-23T09:57:02.495828Z","steps":["trace[1763507131] 'process raft request'  (duration: 121.258106ms)","trace[1763507131] 'compare'  (duration: 128.446744ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:02.811736Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.743867ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837419424559 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-view\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-view\" value_size:1862 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T09:57:02.811827Z","caller":"traceutil/trace.go:171","msg":"trace[334752838] linearizableReadLoop","detail":"{readStateIndex:125; appliedIndex:124; }","duration":"197.624876ms","start":"2025-11-23T09:57:02.614187Z","end":"2025-11-23T09:57:02.811812Z","steps":["trace[334752838] 'read index received'  (duration: 54.776357ms)","trace[334752838] 'applied index is now lower than readState.Index'  (duration: 142.846972ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:57:02.811874Z","caller":"traceutil/trace.go:171","msg":"trace[577911190] transaction","detail":"{read_only:false; response_revision:120; number_of_response:1; }","duration":"309.546043ms","start":"2025-11-23T09:57:02.502295Z","end":"2025-11-23T09:57:02.811841Z","steps":["trace[577911190] 'process raft request'  (duration: 166.630437ms)","trace[577911190] 'compare'  (duration: 142.557878ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:02.811926Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.752655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T09:57:02.811961Z","caller":"traceutil/trace.go:171","msg":"trace[450821894] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:120; }","duration":"197.79258ms","start":"2025-11-23T09:57:02.614154Z","end":"2025-11-23T09:57:02.811947Z","steps":["trace[450821894] 'agreement among raft nodes before linearized reading'  (duration: 197.694344ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:02.812003Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-23T09:57:02.50227Z","time spent":"309.683301ms","remote":"127.0.0.1:39468","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1917,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-view\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-view\" value_size:1862 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T09:57:03.126521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.304764ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837419424563 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:heapster\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:heapster\" value_size:579 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T09:57:03.126599Z","caller":"traceutil/trace.go:171","msg":"trace[1403684060] transaction","detail":"{read_only:false; response_revision:121; number_of_response:1; }","duration":"309.884743ms","start":"2025-11-23T09:57:02.816704Z","end":"2025-11-23T09:57:03.126589Z","steps":["trace[1403684060] 'process raft request'  (duration: 124.45761ms)","trace[1403684060] 'compare'  (duration: 185.120538ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:03.126635Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-23T09:57:02.816683Z","time spent":"309.941015ms","remote":"127.0.0.1:39468","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":625,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:heapster\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:heapster\" value_size:579 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T09:57:03.378154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.573425ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837419424567 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:node-problem-detector\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:node-problem-detector\" value_size:583 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T09:57:03.37825Z","caller":"traceutil/trace.go:171","msg":"trace[407529311] transaction","detail":"{read_only:false; response_revision:123; number_of_response:1; }","duration":"236.959494ms","start":"2025-11-23T09:57:03.141275Z","end":"2025-11-23T09:57:03.378235Z","steps":["trace[407529311] 'process raft request'  (duration: 119.236514ms)","trace[407529311] 'compare'  (duration: 117.440472ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:57:03.488901Z","caller":"traceutil/trace.go:171","msg":"trace[331049729] transaction","detail":"{read_only:false; response_revision:124; number_of_response:1; }","duration":"105.829119ms","start":"2025-11-23T09:57:03.38305Z","end":"2025-11-23T09:57:03.488879Z","steps":["trace[331049729] 'process raft request'  (duration: 105.359949ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:57:03.685992Z","caller":"traceutil/trace.go:171","msg":"trace[1238052414] transaction","detail":"{read_only:false; response_revision:127; number_of_response:1; }","duration":"180.587913ms","start":"2025-11-23T09:57:03.505382Z","end":"2025-11-23T09:57:03.68597Z","steps":["trace[1238052414] 'process raft request'  (duration: 128.699733ms)","trace[1238052414] 'compare'  (duration: 51.773911ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:57:44.684831Z","caller":"traceutil/trace.go:171","msg":"trace[671402052] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"110.153636ms","start":"2025-11-23T09:57:44.574655Z","end":"2025-11-23T09:57:44.684809Z","steps":["trace[671402052] 'process raft request'  (duration: 110.003906ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:57:49 up 40 min,  0 user,  load average: 5.55, 4.20, 2.64
	Linux old-k8s-version-709593 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a1af83bb6749287f8df2adaeff4c43c5820f5194cb24f7fe3eb5ef134893d93c] <==
	I1123 09:57:23.601786       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:57:23.602109       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:57:23.602284       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:57:23.602304       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:57:23.602318       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:57:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:57:23.855098       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:57:23.855140       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:57:23.855154       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:57:23.900801       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:57:24.355697       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:57:24.355735       1 metrics.go:72] Registering metrics
	I1123 09:57:24.355844       1 controller.go:711] "Syncing nftables rules"
	I1123 09:57:33.855972       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:57:33.856030       1 main.go:301] handling current node
	I1123 09:57:43.856054       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:57:43.856111       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9ab267968c030e0a3bce6b123e59cf0e26705c3742842d1fe84461463f48a663] <==
	I1123 09:57:00.606586       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 09:57:00.606625       1 aggregator.go:166] initial CRD sync complete...
	I1123 09:57:00.606634       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 09:57:00.606641       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:57:00.606650       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:57:00.608306       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 09:57:00.609050       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 09:57:00.624076       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:57:00.649174       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 09:57:01.610779       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:57:01.702685       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:57:01.702703       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:57:04.338662       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:57:04.416324       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:57:04.524354       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:57:04.538023       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 09:57:04.540122       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 09:57:04.546988       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:57:04.575545       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 09:57:05.959109       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 09:57:05.975157       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:57:05.986661       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 09:57:17.926455       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 09:57:18.460236       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1123 09:57:47.744877       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.76.2:47470->192.168.76.2:10250: write: connection reset by peer
	
	
	==> kube-controller-manager [1b2964c41626762d3beb765fa131cc83c8eafa60068157afab3d1e775a761750] <==
	I1123 09:57:18.051120       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 09:57:18.052924       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-709593" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1123 09:57:18.132109       1 shared_informer.go:318] Caches are synced for attach detach
	I1123 09:57:18.349828       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-tndwj"
	I1123 09:57:18.372449       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gf5sx"
	I1123 09:57:18.406026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="452.070013ms"
	I1123 09:57:18.463224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.127396ms"
	I1123 09:57:18.483794       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sgv48"
	I1123 09:57:18.483871       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 09:57:18.504473       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tpvt2"
	I1123 09:57:18.560131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.571025ms"
	I1123 09:57:18.560538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="230.617µs"
	I1123 09:57:18.562358       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 09:57:18.562385       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 09:57:19.789485       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 09:57:19.808843       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-tndwj"
	I1123 09:57:19.823673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.107806ms"
	I1123 09:57:19.833064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.315043ms"
	I1123 09:57:19.833185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.73µs"
	I1123 09:57:33.949212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.096µs"
	I1123 09:57:33.981566       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.706µs"
	I1123 09:57:35.176726       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="136.892µs"
	I1123 09:57:35.214616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.894482ms"
	I1123 09:57:35.214767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.972µs"
	I1123 09:57:38.010283       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [e82a6fec044de994c043f2f9c5656e0c2a71e8e480ed8f7cca948de66ed51059] <==
	I1123 09:57:20.277594       1 server_others.go:69] "Using iptables proxy"
	I1123 09:57:20.292272       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 09:57:20.339595       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:57:20.344426       1 server_others.go:152] "Using iptables Proxier"
	I1123 09:57:20.344681       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 09:57:20.344815       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 09:57:20.344909       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 09:57:20.345726       1 server.go:846] "Version info" version="v1.28.0"
	I1123 09:57:20.345900       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:57:20.347106       1 config.go:188] "Starting service config controller"
	I1123 09:57:20.350153       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 09:57:20.349625       1 config.go:97] "Starting endpoint slice config controller"
	I1123 09:57:20.350452       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 09:57:20.350106       1 config.go:315] "Starting node config controller"
	I1123 09:57:20.350583       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 09:57:20.450547       1 shared_informer.go:318] Caches are synced for service config
	I1123 09:57:20.450714       1 shared_informer.go:318] Caches are synced for node config
	I1123 09:57:20.450744       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [33f6ed017ec882589a089aad6a009c657f1fc80298864259b48138233e264c91] <==
	W1123 09:57:01.700971       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 09:57:01.701017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 09:57:01.704770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 09:57:01.704814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 09:57:01.752559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 09:57:01.752596       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1123 09:57:01.981985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 09:57:01.982024       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 09:57:01.983872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 09:57:01.983905       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 09:57:02.057453       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 09:57:02.057498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 09:57:02.144948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 09:57:02.145025       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1123 09:57:03.483078       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 09:57:03.483126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 09:57:03.561961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 09:57:03.562012       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 09:57:03.808694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 09:57:03.808744       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 09:57:03.860531       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 09:57:03.860576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 09:57:03.972432       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 09:57:03.972478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1123 09:57:04.567087       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: W1123 09:57:18.547160    1519 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-709593" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-709593' and this object
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: E1123 09:57:18.547223    1519 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-709593" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-709593' and this object
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709145    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz9pq\" (UniqueName: \"kubernetes.io/projected/f5d963bd-a2f2-44d2-969c-d219c55aba33-kube-api-access-dz9pq\") pod \"kube-proxy-sgv48\" (UID: \"f5d963bd-a2f2-44d2-969c-d219c55aba33\") " pod="kube-system/kube-proxy-sgv48"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709218    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fd3daece-c28b-4efa-ae53-16c16790e5be-cni-cfg\") pod \"kindnet-tpvt2\" (UID: \"fd3daece-c28b-4efa-ae53-16c16790e5be\") " pod="kube-system/kindnet-tpvt2"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709250    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd3daece-c28b-4efa-ae53-16c16790e5be-xtables-lock\") pod \"kindnet-tpvt2\" (UID: \"fd3daece-c28b-4efa-ae53-16c16790e5be\") " pod="kube-system/kindnet-tpvt2"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709281    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6p4v\" (UniqueName: \"kubernetes.io/projected/fd3daece-c28b-4efa-ae53-16c16790e5be-kube-api-access-c6p4v\") pod \"kindnet-tpvt2\" (UID: \"fd3daece-c28b-4efa-ae53-16c16790e5be\") " pod="kube-system/kindnet-tpvt2"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709316    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5d963bd-a2f2-44d2-969c-d219c55aba33-lib-modules\") pod \"kube-proxy-sgv48\" (UID: \"f5d963bd-a2f2-44d2-969c-d219c55aba33\") " pod="kube-system/kube-proxy-sgv48"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709389    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd3daece-c28b-4efa-ae53-16c16790e5be-lib-modules\") pod \"kindnet-tpvt2\" (UID: \"fd3daece-c28b-4efa-ae53-16c16790e5be\") " pod="kube-system/kindnet-tpvt2"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709422    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f5d963bd-a2f2-44d2-969c-d219c55aba33-kube-proxy\") pod \"kube-proxy-sgv48\" (UID: \"f5d963bd-a2f2-44d2-969c-d219c55aba33\") " pod="kube-system/kube-proxy-sgv48"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709454    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5d963bd-a2f2-44d2-969c-d219c55aba33-xtables-lock\") pod \"kube-proxy-sgv48\" (UID: \"f5d963bd-a2f2-44d2-969c-d219c55aba33\") " pod="kube-system/kube-proxy-sgv48"
	Nov 23 09:57:24 old-k8s-version-709593 kubelet[1519]: I1123 09:57:24.152873    1519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sgv48" podStartSLOduration=6.152803535 podCreationTimestamp="2025-11-23 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:21.2206232 +0000 UTC m=+15.292351138" watchObservedRunningTime="2025-11-23 09:57:24.152803535 +0000 UTC m=+18.224531435"
	Nov 23 09:57:24 old-k8s-version-709593 kubelet[1519]: I1123 09:57:24.153064    1519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-tpvt2" podStartSLOduration=2.534840269 podCreationTimestamp="2025-11-23 09:57:18 +0000 UTC" firstStartedPulling="2025-11-23 09:57:19.547788823 +0000 UTC m=+13.619516716" lastFinishedPulling="2025-11-23 09:57:23.165974087 +0000 UTC m=+17.237701980" observedRunningTime="2025-11-23 09:57:24.152485675 +0000 UTC m=+18.224213576" watchObservedRunningTime="2025-11-23 09:57:24.153025533 +0000 UTC m=+18.224753438"
	Nov 23 09:57:33 old-k8s-version-709593 kubelet[1519]: I1123 09:57:33.920548    1519 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 09:57:33 old-k8s-version-709593 kubelet[1519]: I1123 09:57:33.948876    1519 topology_manager.go:215] "Topology Admit Handler" podUID="9a493920-3739-4eb9-8426-3590a8f2ee51" podNamespace="kube-system" podName="coredns-5dd5756b68-gf5sx"
	Nov 23 09:57:33 old-k8s-version-709593 kubelet[1519]: I1123 09:57:33.949059    1519 topology_manager.go:215] "Topology Admit Handler" podUID="ba58926e-fdf3-4750-b44d-7c94a027737e" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 09:57:34 old-k8s-version-709593 kubelet[1519]: I1123 09:57:34.123178    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724lb\" (UniqueName: \"kubernetes.io/projected/ba58926e-fdf3-4750-b44d-7c94a027737e-kube-api-access-724lb\") pod \"storage-provisioner\" (UID: \"ba58926e-fdf3-4750-b44d-7c94a027737e\") " pod="kube-system/storage-provisioner"
	Nov 23 09:57:34 old-k8s-version-709593 kubelet[1519]: I1123 09:57:34.123243    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ba58926e-fdf3-4750-b44d-7c94a027737e-tmp\") pod \"storage-provisioner\" (UID: \"ba58926e-fdf3-4750-b44d-7c94a027737e\") " pod="kube-system/storage-provisioner"
	Nov 23 09:57:34 old-k8s-version-709593 kubelet[1519]: I1123 09:57:34.123297    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rzx7\" (UniqueName: \"kubernetes.io/projected/9a493920-3739-4eb9-8426-3590a8f2ee51-kube-api-access-5rzx7\") pod \"coredns-5dd5756b68-gf5sx\" (UID: \"9a493920-3739-4eb9-8426-3590a8f2ee51\") " pod="kube-system/coredns-5dd5756b68-gf5sx"
	Nov 23 09:57:34 old-k8s-version-709593 kubelet[1519]: I1123 09:57:34.123357    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a493920-3739-4eb9-8426-3590a8f2ee51-config-volume\") pod \"coredns-5dd5756b68-gf5sx\" (UID: \"9a493920-3739-4eb9-8426-3590a8f2ee51\") " pod="kube-system/coredns-5dd5756b68-gf5sx"
	Nov 23 09:57:35 old-k8s-version-709593 kubelet[1519]: I1123 09:57:35.176230    1519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gf5sx" podStartSLOduration=17.176168603 podCreationTimestamp="2025-11-23 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:35.175754843 +0000 UTC m=+29.247482743" watchObservedRunningTime="2025-11-23 09:57:35.176168603 +0000 UTC m=+29.247896503"
	Nov 23 09:57:35 old-k8s-version-709593 kubelet[1519]: I1123 09:57:35.204836    1519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.204788689 podCreationTimestamp="2025-11-23 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:35.19026469 +0000 UTC m=+29.261992589" watchObservedRunningTime="2025-11-23 09:57:35.204788689 +0000 UTC m=+29.276516592"
	Nov 23 09:57:37 old-k8s-version-709593 kubelet[1519]: I1123 09:57:37.507262    1519 topology_manager.go:215] "Topology Admit Handler" podUID="bea346d9-0dca-482c-b9f9-7b71741b18d7" podNamespace="default" podName="busybox"
	Nov 23 09:57:37 old-k8s-version-709593 kubelet[1519]: I1123 09:57:37.646410    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj5kg\" (UniqueName: \"kubernetes.io/projected/bea346d9-0dca-482c-b9f9-7b71741b18d7-kube-api-access-pj5kg\") pod \"busybox\" (UID: \"bea346d9-0dca-482c-b9f9-7b71741b18d7\") " pod="default/busybox"
	Nov 23 09:57:41 old-k8s-version-709593 kubelet[1519]: I1123 09:57:41.192410    1519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.9155870259999999 podCreationTimestamp="2025-11-23 09:57:37 +0000 UTC" firstStartedPulling="2025-11-23 09:57:37.952685082 +0000 UTC m=+32.024412966" lastFinishedPulling="2025-11-23 09:57:40.229447793 +0000 UTC m=+34.301175679" observedRunningTime="2025-11-23 09:57:41.192028507 +0000 UTC m=+35.263756408" watchObservedRunningTime="2025-11-23 09:57:41.192349739 +0000 UTC m=+35.264077634"
	Nov 23 09:57:47 old-k8s-version-709593 kubelet[1519]: E1123 09:57:47.744109    1519 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 192.168.76.2:34062->192.168.76.2:10010: write tcp 192.168.76.2:34062->192.168.76.2:10010: write: broken pipe
	
	
	==> storage-provisioner [6188a0a11a558ccfe4a936446819a158ec0f3ff08b1c7692bf3db57ce82539bc] <==
	I1123 09:57:34.497639       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:57:34.510426       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:57:34.510517       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 09:57:34.519430       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:57:34.519625       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-709593_09fc0e4b-1f89-47c2-90c6-e8921583fe8f!
	I1123 09:57:34.522696       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89d02a34-1ced-4051-82ca-0198f46f6d6a", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-709593_09fc0e4b-1f89-47c2-90c6-e8921583fe8f became leader
	I1123 09:57:34.619835       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-709593_09fc0e4b-1f89-47c2-90c6-e8921583fe8f!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-709593 -n old-k8s-version-709593
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-709593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-709593
helpers_test.go:243: (dbg) docker inspect old-k8s-version-709593:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384",
	        "Created": "2025-11-23T09:56:47.666891207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 294280,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:56:47.720935343Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384/hostname",
	        "HostsPath": "/var/lib/docker/containers/29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384/hosts",
	        "LogPath": "/var/lib/docker/containers/29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384/29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384-json.log",
	        "Name": "/old-k8s-version-709593",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-709593:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-709593",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "29cb528aee84df4277faf7afff19daffc07e3b9a021296ff004f8b42489e8384",
	                "LowerDir": "/var/lib/docker/overlay2/ea62ac2e144b45f2284ed569ef537390326f82b0cb3d40e4d46e0ff286b7eb90-init/diff:/var/lib/docker/overlay2/c80a0dfdb81b7753b0a82e2bc6458805cbbad0a9ce5819c63e1d9b7b71ba226c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ea62ac2e144b45f2284ed569ef537390326f82b0cb3d40e4d46e0ff286b7eb90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ea62ac2e144b45f2284ed569ef537390326f82b0cb3d40e4d46e0ff286b7eb90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ea62ac2e144b45f2284ed569ef537390326f82b0cb3d40e4d46e0ff286b7eb90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-709593",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-709593/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-709593",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-709593",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-709593",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b544aba317fcf40d3e61edbec3240f39587be7e914d5c21fc69a6535b296b152",
	            "SandboxKey": "/var/run/docker/netns/b544aba317fc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-709593": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4fa988beb7cda350f0c11b822dcc90801b7cc48baa23c5c851d275a8d3ed42f8",
	                    "EndpointID": "da8f042fa74ebc4420b7404b4cac4144f9e37e8a91e96eb145a8c67dcfe76dd3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "76:bc:b6:48:41:0f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-709593",
	                        "29cb528aee84"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-709593 -n old-k8s-version-709593
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-709593 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-709593 logs -n 25: (1.209220437s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-676928 sudo systemctl cat kubelet --no-pager                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /var/lib/kubelet/config.yaml                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status docker --all --full --no-pager                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat docker --no-pager                                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/docker/daemon.json                                                                                                                              │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo docker system info                                                                                                                                       │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl status cri-docker --all --full --no-pager                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat cri-docker --no-pager                                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                 │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                           │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cri-dockerd --version                                                                                                                                    │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status containerd --all --full --no-pager                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl cat containerd --no-pager                                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /lib/systemd/system/containerd.service                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/containerd/config.toml                                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo containerd config dump                                                                                                                                   │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status crio --all --full --no-pager                                                                                                            │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat crio --no-pager                                                                                                                            │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                  │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo crio config                                                                                                                                              │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p bridge-676928                                                                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p disable-driver-mounts-178820                                                                                                                                                │ disable-driver-mounts-178820 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ start   │ -p default-k8s-diff-port-696492 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ default-k8s-diff-port-696492 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:57:41
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:57:41.194019  311138 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:57:41.194298  311138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:57:41.194308  311138 out.go:374] Setting ErrFile to fd 2...
	I1123 09:57:41.194312  311138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:57:41.194606  311138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:57:41.195144  311138 out.go:368] Setting JSON to false
	I1123 09:57:41.196591  311138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2400,"bootTime":1763889461,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:57:41.196668  311138 start.go:143] virtualization: kvm guest
	I1123 09:57:41.199167  311138 out.go:179] * [default-k8s-diff-port-696492] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:57:41.201043  311138 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:57:41.201094  311138 notify.go:221] Checking for updates...
	I1123 09:57:41.204382  311138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:57:41.206017  311138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:57:41.207959  311138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:57:41.209794  311138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:57:41.211809  311138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:57:41.214009  311138 config.go:182] Loaded profile config "embed-certs-412583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:41.214105  311138 config.go:182] Loaded profile config "no-preload-309734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:41.214180  311138 config.go:182] Loaded profile config "old-k8s-version-709593": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 09:57:41.214271  311138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:57:41.241306  311138 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:57:41.241474  311138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:57:41.312013  311138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:57:41.299959199 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:57:41.312116  311138 docker.go:319] overlay module found
	I1123 09:57:41.314243  311138 out.go:179] * Using the docker driver based on user configuration
	I1123 09:57:41.316002  311138 start.go:309] selected driver: docker
	I1123 09:57:41.316024  311138 start.go:927] validating driver "docker" against <nil>
	I1123 09:57:41.316037  311138 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:57:41.316751  311138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:57:41.385595  311138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:57:41.373759534 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:57:41.385794  311138 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:57:41.386023  311138 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:41.388087  311138 out.go:179] * Using Docker driver with root privileges
	I1123 09:57:41.389651  311138 cni.go:84] Creating CNI manager for ""
	I1123 09:57:41.389725  311138 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:57:41.389738  311138 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:57:41.389816  311138 start.go:353] cluster config:
	{Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:57:41.391556  311138 out.go:179] * Starting "default-k8s-diff-port-696492" primary control-plane node in "default-k8s-diff-port-696492" cluster
	I1123 09:57:41.392982  311138 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 09:57:41.394476  311138 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:57:41.395978  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:41.396028  311138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 09:57:41.396036  311138 cache.go:65] Caching tarball of preloaded images
	I1123 09:57:41.396075  311138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:57:41.396157  311138 preload.go:238] Found /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 09:57:41.396175  311138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 09:57:41.396320  311138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json ...
	I1123 09:57:41.396374  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json: {Name:mk3b81d8fd8561a54828649e3e510565221995b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:41.422089  311138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:57:41.422112  311138 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:57:41.422133  311138 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:57:41.422177  311138 start.go:360] acquireMachinesLock for default-k8s-diff-port-696492: {Name:mkc8ee83ed2b7a995e355ddec223dfeea233bbf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:57:41.422316  311138 start.go:364] duration metric: took 112.296µs to acquireMachinesLock for "default-k8s-diff-port-696492"
	I1123 09:57:41.422500  311138 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:57:41.422632  311138 start.go:125] createHost starting for "" (driver="docker")
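Editor's note: the profile save above goes through a lock acquisition with a 500ms retry delay and a 1m timeout before config.json is written, and the machines lock is acquired the same way. The following is only a rough, self-contained sketch of that acquire-with-retry pattern, not minikube's actual lock.go; the lock path and helper name are made up for illustration.

    // lockdemo.go: acquire an exclusive lock file with a retry delay and timeout,
    // loosely mirroring the Delay:500ms / Timeout:1m0s values in the log above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls until it can create lockPath exclusively or the timeout expires.
    func acquireLock(lockPath string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(lockPath) }, nil // release function
            }
            if !os.IsExist(err) {
                return nil, err
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for lock %s", lockPath)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("/tmp/profile-config.lock", 500*time.Millisecond, time.Minute)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer release()
        fmt.Println("lock held; safe to write config.json")
    }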
	W1123 09:57:37.251564  300017 node_ready.go:57] node "embed-certs-412583" has "Ready":"False" status (will retry)
	W1123 09:57:39.751746  300017 node_ready.go:57] node "embed-certs-412583" has "Ready":"False" status (will retry)
	I1123 09:57:42.255256  300017 node_ready.go:49] node "embed-certs-412583" is "Ready"
	I1123 09:57:42.255291  300017 node_ready.go:38] duration metric: took 11.507766088s for node "embed-certs-412583" to be "Ready" ...
	I1123 09:57:42.255310  300017 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:57:42.255471  300017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:57:42.277737  300017 api_server.go:72] duration metric: took 12.028046262s to wait for apiserver process to appear ...
	I1123 09:57:42.277770  300017 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:57:42.277792  300017 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:57:42.285468  300017 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 09:57:42.287274  300017 api_server.go:141] control plane version: v1.34.1
	I1123 09:57:42.287395  300017 api_server.go:131] duration metric: took 9.61454ms to wait for apiserver health ...
	I1123 09:57:42.287422  300017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:57:42.294433  300017 system_pods.go:59] 8 kube-system pods found
	I1123 09:57:42.294478  300017 system_pods.go:61] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.294486  300017 system_pods.go:61] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.294493  300017 system_pods.go:61] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.294499  300017 system_pods.go:61] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.294505  300017 system_pods.go:61] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.294510  300017 system_pods.go:61] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.294515  300017 system_pods.go:61] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.294526  300017 system_pods.go:61] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.294539  300017 system_pods.go:74] duration metric: took 7.098728ms to wait for pod list to return data ...
	I1123 09:57:42.294549  300017 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:57:42.298321  300017 default_sa.go:45] found service account: "default"
	I1123 09:57:42.298368  300017 default_sa.go:55] duration metric: took 3.811774ms for default service account to be created ...
	I1123 09:57:42.298382  300017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:57:42.302807  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.302871  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.302887  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.302896  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.302903  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.302927  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.302937  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.302943  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.302954  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.303049  300017 retry.go:31] will retry after 268.599682ms: missing components: kube-dns
	I1123 09:57:42.577490  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.577531  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.577541  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.577550  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.577557  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.577563  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.577568  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.577573  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.577581  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.577600  300017 retry.go:31] will retry after 240.156475ms: missing components: kube-dns
	I1123 09:57:42.822131  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.822171  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.822177  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.822182  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.822186  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.822190  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.822194  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.822197  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.822202  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.822216  300017 retry.go:31] will retry after 383.926777ms: missing components: kube-dns
	I1123 09:57:43.211532  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:43.211575  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Running
	I1123 09:57:43.211585  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:43.211592  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:43.211600  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:43.211608  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:43.211624  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:43.211635  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:43.211640  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Running
	I1123 09:57:43.211650  300017 system_pods.go:126] duration metric: took 913.260942ms to wait for k8s-apps to be running ...
	I1123 09:57:43.211661  300017 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:57:43.211722  300017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:57:43.226055  300017 system_svc.go:56] duration metric: took 14.383207ms WaitForService to wait for kubelet
	I1123 09:57:43.226087  300017 kubeadm.go:587] duration metric: took 12.976401428s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:43.226108  300017 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:57:43.229492  300017 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:57:43.229524  300017 node_conditions.go:123] node cpu capacity is 8
	I1123 09:57:43.229547  300017 node_conditions.go:105] duration metric: took 3.432669ms to run NodePressure ...
	I1123 09:57:43.229560  300017 start.go:242] waiting for startup goroutines ...
	I1123 09:57:43.229570  300017 start.go:247] waiting for cluster config update ...
	I1123 09:57:43.229583  300017 start.go:256] writing updated cluster config ...
	I1123 09:57:43.229975  300017 ssh_runner.go:195] Run: rm -f paused
	I1123 09:57:43.235596  300017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:43.243251  300017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.248984  300017 pod_ready.go:94] pod "coredns-66bc5c9577-8dgc7" is "Ready"
	I1123 09:57:43.249015  300017 pod_ready.go:86] duration metric: took 5.729453ms for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.251635  300017 pod_ready.go:83] waiting for pod "etcd-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.256613  300017 pod_ready.go:94] pod "etcd-embed-certs-412583" is "Ready"
	I1123 09:57:43.256645  300017 pod_ready.go:86] duration metric: took 4.984583ms for pod "etcd-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.259023  300017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.264242  300017 pod_ready.go:94] pod "kube-apiserver-embed-certs-412583" is "Ready"
	I1123 09:57:43.264273  300017 pod_ready.go:86] duration metric: took 5.223434ms for pod "kube-apiserver-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.311182  300017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.642602  300017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-412583" is "Ready"
	I1123 09:57:43.642637  300017 pod_ready.go:86] duration metric: took 331.426321ms for pod "kube-controller-manager-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.843849  300017 pod_ready.go:83] waiting for pod "kube-proxy-wm7k2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.244623  300017 pod_ready.go:94] pod "kube-proxy-wm7k2" is "Ready"
	I1123 09:57:44.244667  300017 pod_ready.go:86] duration metric: took 400.77745ms for pod "kube-proxy-wm7k2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.444056  300017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.843963  300017 pod_ready.go:94] pod "kube-scheduler-embed-certs-412583" is "Ready"
	I1123 09:57:44.843992  300017 pod_ready.go:86] duration metric: took 399.904179ms for pod "kube-scheduler-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.844006  300017 pod_ready.go:40] duration metric: took 1.608365258s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:44.891853  300017 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:57:44.964864  300017 out.go:179] * Done! kubectl is now configured to use "embed-certs-412583" cluster and "default" namespace by default
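Editor's note: the embed-certs startup above waits for the node to become Ready and then polls https://192.168.103.2:8443/healthz until it returns 200 ("ok") before moving on to pod checks. Below is a minimal stand-alone sketch of that healthz wait using only the Go standard library; the address and timeout are taken from the log, and TLS verification is skipped purely for brevity (a real client would trust the cluster CA instead).

    // healthzwait.go: poll an apiserver /healthz endpoint until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // demo only: skip certificate verification instead of loading the cluster CA
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.103.2:8443/healthz", 2*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("apiserver healthz ok")
    }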
	W1123 09:57:41.488122  296642 node_ready.go:57] node "no-preload-309734" has "Ready":"False" status (will retry)
	W1123 09:57:43.488201  296642 node_ready.go:57] node "no-preload-309734" has "Ready":"False" status (will retry)
	I1123 09:57:43.988019  296642 node_ready.go:49] node "no-preload-309734" is "Ready"
	I1123 09:57:43.988052  296642 node_ready.go:38] duration metric: took 14.003534589s for node "no-preload-309734" to be "Ready" ...
	I1123 09:57:43.988069  296642 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:57:43.988149  296642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:57:44.008503  296642 api_server.go:72] duration metric: took 14.434117996s to wait for apiserver process to appear ...
	I1123 09:57:44.008530  296642 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:57:44.008551  296642 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:57:44.017109  296642 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 09:57:44.018176  296642 api_server.go:141] control plane version: v1.34.1
	I1123 09:57:44.018200  296642 api_server.go:131] duration metric: took 9.663468ms to wait for apiserver health ...
	I1123 09:57:44.018208  296642 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:57:44.022287  296642 system_pods.go:59] 8 kube-system pods found
	I1123 09:57:44.022324  296642 system_pods.go:61] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.022351  296642 system_pods.go:61] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.022364  296642 system_pods.go:61] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.022369  296642 system_pods.go:61] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.022375  296642 system_pods.go:61] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.022381  296642 system_pods.go:61] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.022387  296642 system_pods.go:61] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.022397  296642 system_pods.go:61] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.022406  296642 system_pods.go:74] duration metric: took 4.191598ms to wait for pod list to return data ...
	I1123 09:57:44.022421  296642 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:57:44.025262  296642 default_sa.go:45] found service account: "default"
	I1123 09:57:44.025287  296642 default_sa.go:55] duration metric: took 2.858313ms for default service account to be created ...
	I1123 09:57:44.025300  296642 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:57:44.028240  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.028269  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.028275  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.028281  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.028285  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.028289  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.028293  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.028296  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.028300  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.028346  296642 retry.go:31] will retry after 283.472429ms: missing components: kube-dns
	I1123 09:57:44.317300  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.317353  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.317361  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.317370  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.317376  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.317382  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.317387  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.317391  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.317397  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.317416  296642 retry.go:31] will retry after 321.7427ms: missing components: kube-dns
	I1123 09:57:44.689277  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.689322  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.689344  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.689353  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.689359  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.689366  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.689370  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.689375  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.689382  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.689411  296642 retry.go:31] will retry after 353.961831ms: missing components: kube-dns
	I1123 09:57:45.048995  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:45.049060  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:45.049069  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:45.049078  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:45.049084  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:45.049090  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:45.049099  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:45.049104  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:45.049116  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:45.049135  296642 retry.go:31] will retry after 412.630882ms: missing components: kube-dns
	I1123 09:57:45.607770  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:45.607816  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:45.607826  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:45.607836  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:45.607841  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:45.607847  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:45.607851  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:45.607856  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:45.607873  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:45.607891  296642 retry.go:31] will retry after 544.365573ms: missing components: kube-dns
	I1123 09:57:41.425584  311138 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:57:41.425893  311138 start.go:159] libmachine.API.Create for "default-k8s-diff-port-696492" (driver="docker")
	I1123 09:57:41.425945  311138 client.go:173] LocalClient.Create starting
	I1123 09:57:41.426056  311138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem
	I1123 09:57:41.426100  311138 main.go:143] libmachine: Decoding PEM data...
	I1123 09:57:41.426121  311138 main.go:143] libmachine: Parsing certificate...
	I1123 09:57:41.426185  311138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem
	I1123 09:57:41.426208  311138 main.go:143] libmachine: Decoding PEM data...
	I1123 09:57:41.426217  311138 main.go:143] libmachine: Parsing certificate...
	I1123 09:57:41.426608  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:57:41.445568  311138 cli_runner.go:211] docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:57:41.445670  311138 network_create.go:284] running [docker network inspect default-k8s-diff-port-696492] to gather additional debugging logs...
	I1123 09:57:41.445697  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492
	W1123 09:57:41.465174  311138 cli_runner.go:211] docker network inspect default-k8s-diff-port-696492 returned with exit code 1
	I1123 09:57:41.465216  311138 network_create.go:287] error running [docker network inspect default-k8s-diff-port-696492]: docker network inspect default-k8s-diff-port-696492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-696492 not found
	I1123 09:57:41.465236  311138 network_create.go:289] output of [docker network inspect default-k8s-diff-port-696492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-696492 not found
	
	** /stderr **
	I1123 09:57:41.465403  311138 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:57:41.487255  311138 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-de5cba392bb4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:8d:f5:88:bc:8b} reservation:<nil>}
	I1123 09:57:41.488105  311138 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e2eabbe85d5b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:f4:02:bd:23:31} reservation:<nil>}
	I1123 09:57:41.489037  311138 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-22e47e96d08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:9e:83:f9:9f:f6} reservation:<nil>}
	I1123 09:57:41.489614  311138 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4fa988beb7cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:18:12:be:77:f6} reservation:<nil>}
	I1123 09:57:41.492079  311138 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d80820}
	I1123 09:57:41.492121  311138 network_create.go:124] attempt to create docker network default-k8s-diff-port-696492 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 09:57:41.492171  311138 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 default-k8s-diff-port-696492
	I1123 09:57:41.554538  311138 network_create.go:108] docker network default-k8s-diff-port-696492 192.168.85.0/24 created
	I1123 09:57:41.554588  311138 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-696492" container
	I1123 09:57:41.554664  311138 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:57:41.575522  311138 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-696492 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:57:41.598058  311138 oci.go:103] Successfully created a docker volume default-k8s-diff-port-696492
	I1123 09:57:41.598141  311138 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-696492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --entrypoint /usr/bin/test -v default-k8s-diff-port-696492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:57:42.041176  311138 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-696492
	I1123 09:57:42.041254  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:42.041269  311138 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:57:42.041325  311138 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-696492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
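Editor's note: before the default-k8s-diff-port network could be created, the log above shows the subnets already claimed by other bridge networks (192.168.49.0/24 through 192.168.76.0/24) being skipped and 192.168.85.0/24 being picked. The toy sketch below illustrates that selection, stepping the third octet by 9 as the addresses in the log suggest; the "taken" set is hard-coded here instead of coming from docker network inspect.

    // subnetpick.go: return the first candidate private /24 subnet not already in use.
    package main

    import "fmt"

    func firstFreeSubnet(taken map[string]bool) (string, bool) {
        for octet := 49; octet <= 247; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[subnet] {
                return subnet, true
            }
        }
        return "", false
    }

    func main() {
        // stand-in for what `docker network inspect bridge`/existing networks report
        taken := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        if subnet, ok := firstFreeSubnet(taken); ok {
            fmt.Println("using free private subnet", subnet) // prints 192.168.85.0/24 for this input
        }
    }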
	I1123 09:57:46.265821  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:46.265851  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Running
	I1123 09:57:46.265856  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:46.265860  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:46.265863  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:46.265868  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:46.265870  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:46.265875  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:46.265879  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Running
	I1123 09:57:46.265889  296642 system_pods.go:126] duration metric: took 2.240582653s to wait for k8s-apps to be running ...
	I1123 09:57:46.265903  296642 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:57:46.265972  296642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:57:46.283075  296642 system_svc.go:56] duration metric: took 17.161056ms WaitForService to wait for kubelet
	I1123 09:57:46.283105  296642 kubeadm.go:587] duration metric: took 16.70872571s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:46.283128  296642 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:57:46.491444  296642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:57:46.491473  296642 node_conditions.go:123] node cpu capacity is 8
	I1123 09:57:46.491486  296642 node_conditions.go:105] duration metric: took 208.353263ms to run NodePressure ...
	I1123 09:57:46.491509  296642 start.go:242] waiting for startup goroutines ...
	I1123 09:57:46.491520  296642 start.go:247] waiting for cluster config update ...
	I1123 09:57:46.491533  296642 start.go:256] writing updated cluster config ...
	I1123 09:57:46.491804  296642 ssh_runner.go:195] Run: rm -f paused
	I1123 09:57:46.498152  296642 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:46.503240  296642 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.508998  296642 pod_ready.go:94] pod "coredns-66bc5c9577-sx25q" is "Ready"
	I1123 09:57:46.509028  296642 pod_ready.go:86] duration metric: took 5.757344ms for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.512072  296642 pod_ready.go:83] waiting for pod "etcd-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.517750  296642 pod_ready.go:94] pod "etcd-no-preload-309734" is "Ready"
	I1123 09:57:46.517777  296642 pod_ready.go:86] duration metric: took 5.673234ms for pod "etcd-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.520446  296642 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.525480  296642 pod_ready.go:94] pod "kube-apiserver-no-preload-309734" is "Ready"
	I1123 09:57:46.525513  296642 pod_ready.go:86] duration metric: took 5.036877ms for pod "kube-apiserver-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.528196  296642 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.902790  296642 pod_ready.go:94] pod "kube-controller-manager-no-preload-309734" is "Ready"
	I1123 09:57:46.902815  296642 pod_ready.go:86] duration metric: took 374.588413ms for pod "kube-controller-manager-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.104263  296642 pod_ready.go:83] waiting for pod "kube-proxy-jpvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.504876  296642 pod_ready.go:94] pod "kube-proxy-jpvhc" is "Ready"
	I1123 09:57:47.504999  296642 pod_ready.go:86] duration metric: took 400.696383ms for pod "kube-proxy-jpvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.706275  296642 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:48.104684  296642 pod_ready.go:94] pod "kube-scheduler-no-preload-309734" is "Ready"
	I1123 09:57:48.104720  296642 pod_ready.go:86] duration metric: took 398.41369ms for pod "kube-scheduler-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:48.104739  296642 pod_ready.go:40] duration metric: took 1.606531718s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:48.181507  296642 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:57:48.183959  296642 out.go:179] * Done! kubectl is now configured to use "no-preload-309734" cluster and "default" namespace by default
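Editor's note: the kube-dns wait above retries with a slowly growing, jittered delay ("will retry after ...ms: missing components: kube-dns") until every kube-system pod reports Running. The snippet below is a small stand-in for that loop, with a fake check function in place of the real pod listing; the backoff constants are illustrative, not minikube's.

    // retrydemo.go: retry a readiness check with a jittered, growing backoff.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("giving up: %w", err)
            }
            // grow the wait a little each round and add jitter, as in the log
            wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            backoff = backoff * 5 / 4
        }
    }

    func main() {
        attempts := 0
        err := retryUntil(30*time.Second, func() error {
            attempts++
            if attempts < 4 {
                return errors.New("missing components: kube-dns") // stand-in for the pod listing
            }
            return nil // all kube-system pods Running
        })
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("k8s-apps are running")
    }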
	I1123 09:57:46.740944  311138 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-696492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.699532205s)
	I1123 09:57:46.741010  311138 kic.go:203] duration metric: took 4.699734046s to extract preloaded images to volume ...
	W1123 09:57:46.741179  311138 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:57:46.741234  311138 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:57:46.741304  311138 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:57:46.807009  311138 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-696492 --name default-k8s-diff-port-696492 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --network default-k8s-diff-port-696492 --ip 192.168.85.2 --volume default-k8s-diff-port-696492:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:57:47.199589  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Running}}
	I1123 09:57:47.220655  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.242623  311138 cli_runner.go:164] Run: docker exec default-k8s-diff-port-696492 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:57:47.295743  311138 oci.go:144] the created container "default-k8s-diff-port-696492" has a running status.
	I1123 09:57:47.295783  311138 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa...
	I1123 09:57:47.562280  311138 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:57:47.611801  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.650055  311138 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:57:47.650078  311138 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-696492 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:57:47.733580  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.763876  311138 machine.go:94] provisionDockerMachine start ...
	I1123 09:57:47.763997  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:47.798484  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:47.798947  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:47.798969  311138 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:57:47.966787  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-696492
	
	I1123 09:57:47.966822  311138 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-696492"
	I1123 09:57:47.966888  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:47.993804  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:47.994099  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:47.994117  311138 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-696492 && echo "default-k8s-diff-port-696492" | sudo tee /etc/hostname
	I1123 09:57:48.174661  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-696492
	
	I1123 09:57:48.174752  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.203529  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:48.203843  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:48.203881  311138 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-696492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-696492/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-696492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:57:48.379959  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:57:48.380002  311138 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-3552/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-3552/.minikube}
	I1123 09:57:48.380096  311138 ubuntu.go:190] setting up certificates
	I1123 09:57:48.380127  311138 provision.go:84] configureAuth start
	I1123 09:57:48.380222  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:48.421922  311138 provision.go:143] copyHostCerts
	I1123 09:57:48.422045  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem, removing ...
	I1123 09:57:48.422074  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem
	I1123 09:57:48.422196  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem (1679 bytes)
	I1123 09:57:48.422353  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem, removing ...
	I1123 09:57:48.422365  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem
	I1123 09:57:48.422399  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem (1082 bytes)
	I1123 09:57:48.422467  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem, removing ...
	I1123 09:57:48.422523  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem
	I1123 09:57:48.422566  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem (1123 bytes)
	I1123 09:57:48.422642  311138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-696492 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-696492 localhost minikube]
	I1123 09:57:48.539621  311138 provision.go:177] copyRemoteCerts
	I1123 09:57:48.539708  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:57:48.539762  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.564284  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:48.677154  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 09:57:48.704807  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 09:57:48.730566  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:57:48.755362  311138 provision.go:87] duration metric: took 375.193527ms to configureAuth
	I1123 09:57:48.755396  311138 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:57:48.755732  311138 config.go:182] Loaded profile config "default-k8s-diff-port-696492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:48.755752  311138 machine.go:97] duration metric: took 991.839554ms to provisionDockerMachine
	I1123 09:57:48.755762  311138 client.go:176] duration metric: took 7.329805852s to LocalClient.Create
	I1123 09:57:48.755786  311138 start.go:167] duration metric: took 7.329894759s to libmachine.API.Create "default-k8s-diff-port-696492"
	I1123 09:57:48.755799  311138 start.go:293] postStartSetup for "default-k8s-diff-port-696492" (driver="docker")
	I1123 09:57:48.755811  311138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:57:48.755868  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:57:48.755919  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.784317  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:48.901734  311138 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:57:48.906292  311138 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:57:48.906325  311138 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:57:48.906355  311138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/addons for local assets ...
	I1123 09:57:48.906577  311138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/files for local assets ...
	I1123 09:57:48.906715  311138 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem -> 71092.pem in /etc/ssl/certs
	I1123 09:57:48.906835  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:57:48.917431  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:57:48.947477  311138 start.go:296] duration metric: took 191.661634ms for postStartSetup
	I1123 09:57:48.947957  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:48.973141  311138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json ...
	I1123 09:57:48.973692  311138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:57:48.973751  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.996029  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.106682  311138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:57:49.112230  311138 start.go:128] duration metric: took 7.689569326s to createHost
	I1123 09:57:49.112259  311138 start.go:83] releasing machines lock for "default-k8s-diff-port-696492", held for 7.689795634s
	I1123 09:57:49.112351  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:49.135976  311138 ssh_runner.go:195] Run: cat /version.json
	I1123 09:57:49.136033  311138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:57:49.136042  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:49.136113  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:49.160077  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.161278  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.264125  311138 ssh_runner.go:195] Run: systemctl --version
	I1123 09:57:49.329282  311138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:57:49.335197  311138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:57:49.335268  311138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:57:49.366357  311138 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:57:49.366380  311138 start.go:496] detecting cgroup driver to use...
	I1123 09:57:49.366416  311138 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:57:49.366470  311138 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 09:57:49.383235  311138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 09:57:49.399768  311138 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:57:49.399842  311138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:57:49.420125  311138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:57:49.442300  311138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:57:49.541498  311138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:57:49.659194  311138 docker.go:234] disabling docker service ...
	I1123 09:57:49.659272  311138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:57:49.682070  311138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:57:49.698015  311138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:57:49.798105  311138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:57:49.894575  311138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:57:49.911733  311138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:57:49.931314  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 09:57:49.945424  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 09:57:49.956889  311138 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 09:57:49.956953  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 09:57:49.967923  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:57:49.979575  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 09:57:49.991202  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:57:50.002918  311138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:57:50.015086  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 09:57:50.027588  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 09:57:50.038500  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 09:57:50.050508  311138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:57:50.060907  311138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:57:50.069882  311138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:57:50.169936  311138 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 09:57:50.287676  311138 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 09:57:50.287747  311138 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 09:57:50.292388  311138 start.go:564] Will wait 60s for crictl version
	I1123 09:57:50.292450  311138 ssh_runner.go:195] Run: which crictl
	I1123 09:57:50.296873  311138 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:57:50.325533  311138 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 09:57:50.325605  311138 ssh_runner.go:195] Run: containerd --version
	I1123 09:57:50.350974  311138 ssh_runner.go:195] Run: containerd --version
	I1123 09:57:50.381808  311138 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
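Note: the start log above shows the runtime preparation steps — crictl is pointed at the containerd socket via /etc/crictl.yaml, /etc/containerd/config.toml is rewritten (sandbox_image, SystemdCgroup = true, runc v2), and containerd is restarted before the socket is polled. A minimal way to spot-check those settings afterwards (a sketch; <profile> is a placeholder for the profile under test) is:

  # confirm crictl is pointed at the containerd socket
  minikube ssh -p <profile> "cat /etc/crictl.yaml"
  # confirm the cgroup driver and sandbox image rewrites took effect
  minikube ssh -p <profile> "sudo grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml"
  # confirm containerd came back up and crictl can reach it
  minikube ssh -p <profile> "sudo crictl version"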
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	8d7f40f8f4e07       56cc512116c8f       10 seconds ago      Running             busybox                   0                   fef27a1a4d0d4       busybox                                          default
	d15093524dcf0       ead0a4a53df89       16 seconds ago      Running             coredns                   0                   1410c58ee49e1       coredns-5dd5756b68-gf5sx                         kube-system
	6188a0a11a558       6e38f40d628db       16 seconds ago      Running             storage-provisioner       0                   d10f215129879       storage-provisioner                              kube-system
	a1af83bb67492       409467f978b4a       27 seconds ago      Running             kindnet-cni               0                   0d60321491712       kindnet-tpvt2                                    kube-system
	e82a6fec044de       ea1030da44aa1       31 seconds ago      Running             kube-proxy                0                   11e7ed694601b       kube-proxy-sgv48                                 kube-system
	1b2964c416267       4be79c38a4bab       52 seconds ago      Running             kube-controller-manager   0                   2cc4143ea8b90       kube-controller-manager-old-k8s-version-709593   kube-system
	33f6ed017ec88       f6f496300a2ae       52 seconds ago      Running             kube-scheduler            0                   11295be3c0583       kube-scheduler-old-k8s-version-709593            kube-system
	9ab267968c030       bb5e0dde9054c       52 seconds ago      Running             kube-apiserver            0                   86d19ce97a6b1       kube-apiserver-old-k8s-version-709593            kube-system
	d4c298d1c8060       73deb9a3f7025       52 seconds ago      Running             etcd                      0                   2f9ec40d5f287       etcd-old-k8s-version-709593                      kube-system
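Note: the table above is crictl's view of the node. From a shell on the node it can be regenerated, or a single container from it inspected, with for example:

  # list all containers with image, state, pod and namespace columns
  sudo crictl ps -a
  # tail the logs of one container from the table, e.g. the busybox container
  sudo crictl logs 8d7f40f8f4e07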
	
	
	==> containerd <==
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.420590122Z" level=info msg="CreateContainer within sandbox \"d10f2151298793071f334a433fb6cfce4b8b35c05f27a6d4e58960cedbf96462\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"6188a0a11a558ccfe4a936446819a158ec0f3ff08b1c7692bf3db57ce82539bc\""
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.421304727Z" level=info msg="StartContainer for \"6188a0a11a558ccfe4a936446819a158ec0f3ff08b1c7692bf3db57ce82539bc\""
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.423036667Z" level=info msg="connecting to shim 6188a0a11a558ccfe4a936446819a158ec0f3ff08b1c7692bf3db57ce82539bc" address="unix:///run/containerd/s/1f0be7d26635bbcb41f6c32b3d2f1385a50ecbc1dec74ce6548e85610e0cefc1" protocol=ttrpc version=3
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.423927224Z" level=info msg="CreateContainer within sandbox \"1410c58ee49e106f41592b5e6ae663765165c9b234249dacefc4e2eccebfec08\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d15093524dcf0f71add09a89666b6ef551f8abcfe19462f1f52e6396cfa9b90f\""
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.424701663Z" level=info msg="StartContainer for \"d15093524dcf0f71add09a89666b6ef551f8abcfe19462f1f52e6396cfa9b90f\""
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.425764608Z" level=info msg="connecting to shim d15093524dcf0f71add09a89666b6ef551f8abcfe19462f1f52e6396cfa9b90f" address="unix:///run/containerd/s/fe12e30014183b4c11ebd3e6acfbe97fc1992c631d1626cb13faef4fe4d22ee6" protocol=ttrpc version=3
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.488919409Z" level=info msg="StartContainer for \"d15093524dcf0f71add09a89666b6ef551f8abcfe19462f1f52e6396cfa9b90f\" returns successfully"
	Nov 23 09:57:34 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:34.489532054Z" level=info msg="StartContainer for \"6188a0a11a558ccfe4a936446819a158ec0f3ff08b1c7692bf3db57ce82539bc\" returns successfully"
	Nov 23 09:57:37 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:37.817959050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bea346d9-0dca-482c-b9f9-7b71741b18d7,Namespace:default,Attempt:0,}"
	Nov 23 09:57:37 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:37.866021477Z" level=info msg="connecting to shim fef27a1a4d0d4d0fd89a702b88e4f10a3d0f81a41d5a766dcd38d6273f063615" address="unix:///run/containerd/s/f66c8e58b533a67c21226ca176913c77f22823731a0ac223ff958c8fefe43b11" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:57:37 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:37.950965400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bea346d9-0dca-482c-b9f9-7b71741b18d7,Namespace:default,Attempt:0,} returns sandbox id \"fef27a1a4d0d4d0fd89a702b88e4f10a3d0f81a41d5a766dcd38d6273f063615\""
	Nov 23 09:57:37 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:37.953294596Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.223204984Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.224183979Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396648"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.226078502Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.228512955Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.229002948Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.275384117s"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.229045171Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.230910353Z" level=info msg="CreateContainer within sandbox \"fef27a1a4d0d4d0fd89a702b88e4f10a3d0f81a41d5a766dcd38d6273f063615\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.242585175Z" level=info msg="Container 8d7f40f8f4e0763efe28dd2b910dd945b4ad8925953ca7a945bf4566509889f4: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.253136286Z" level=info msg="CreateContainer within sandbox \"fef27a1a4d0d4d0fd89a702b88e4f10a3d0f81a41d5a766dcd38d6273f063615\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8d7f40f8f4e0763efe28dd2b910dd945b4ad8925953ca7a945bf4566509889f4\""
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.253869141Z" level=info msg="StartContainer for \"8d7f40f8f4e0763efe28dd2b910dd945b4ad8925953ca7a945bf4566509889f4\""
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.258087383Z" level=info msg="connecting to shim 8d7f40f8f4e0763efe28dd2b910dd945b4ad8925953ca7a945bf4566509889f4" address="unix:///run/containerd/s/f66c8e58b533a67c21226ca176913c77f22823731a0ac223ff958c8fefe43b11" protocol=ttrpc version=3
	Nov 23 09:57:40 old-k8s-version-709593 containerd[660]: time="2025-11-23T09:57:40.328511725Z" level=info msg="StartContainer for \"8d7f40f8f4e0763efe28dd2b910dd945b4ad8925953ca7a945bf4566509889f4\" returns successfully"
	Nov 23 09:57:47 old-k8s-version-709593 containerd[660]: E1123 09:57:47.651496     660 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [d15093524dcf0f71add09a89666b6ef551f8abcfe19462f1f52e6396cfa9b90f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34931 - 60518 "HINFO IN 7244376839273605299.5052886007572092194. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04020687s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-709593
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-709593
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=old-k8s-version-709593
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_57_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:57:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-709593
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:57:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:57:36 +0000   Sun, 23 Nov 2025 09:56:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:57:36 +0000   Sun, 23 Nov 2025 09:56:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:57:36 +0000   Sun, 23 Nov 2025 09:56:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:57:36 +0000   Sun, 23 Nov 2025 09:57:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-709593
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                9e6f0832-18db-4c8d-86e4-20812ea439e5
	  Boot ID:                    e4c4d39b-bebd-4037-9237-26b945dbe084
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-5dd5756b68-gf5sx                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     33s
	  kube-system                 etcd-old-k8s-version-709593                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         45s
	  kube-system                 kindnet-tpvt2                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-old-k8s-version-709593             250m (3%)     0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-controller-manager-old-k8s-version-709593    200m (2%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-proxy-sgv48                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-old-k8s-version-709593             100m (1%)     0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30s                kube-proxy       
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node old-k8s-version-709593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node old-k8s-version-709593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x7 over 54s)  kubelet          Node old-k8s-version-709593 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  45s                kubelet          Node old-k8s-version-709593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s                kubelet          Node old-k8s-version-709593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s                kubelet          Node old-k8s-version-709593 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           33s                node-controller  Node old-k8s-version-709593 event: Registered Node old-k8s-version-709593 in Controller
	  Normal  NodeReady                18s                kubelet          Node old-k8s-version-709593 status is now: NodeReady
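Note: the node summary above is the describe output for the control-plane node; with the kubeconfig context used elsewhere in this report it can be refreshed directly:

  # re-run the node description against the same context
  kubectl --context old-k8s-version-709593 describe node old-k8s-version-709593
  # or just the headline status columns
  kubectl --context old-k8s-version-709593 get node old-k8s-version-709593 -o wide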
	
	
	==> dmesg <==
	[  +6.288463] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 09:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[Nov23 09:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.195562] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +5.912917] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 c0 1c 98 33 a9 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.002091] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 47 bd bf 96 57 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[  +4.460318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 85 b9 91 f8 a4 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +2.904694] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	[Nov23 09:57] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 76 48 bf 8b d1 fc 08 06
	[  +0.000931] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	
	
	==> etcd [d4c298d1c8060139c5bb973acee87dc3fbc6b6454b9e3c8ebe9c6b86a2e5a7b8] <==
	{"level":"info","ts":"2025-11-23T09:56:58.59753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T09:56:58.597864Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T09:56:58.597974Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T09:56:58.598004Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T09:56:58.599014Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-11-23T09:57:01.971736Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.487229ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837419424543 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:monitoring\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:monitoring\" value_size:573 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T09:57:01.971868Z","caller":"traceutil/trace.go:171","msg":"trace[1367842110] transaction","detail":"{read_only:false; response_revision:112; number_of_response:1; }","duration":"185.333295ms","start":"2025-11-23T09:57:01.786515Z","end":"2025-11-23T09:57:01.971849Z","steps":["trace[1367842110] 'process raft request'  (duration: 59.969834ms)","trace[1367842110] 'compare'  (duration: 124.335128ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:02.204167Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.409698ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837419424553 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/view\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/view\" value_size:673 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T09:57:02.204261Z","caller":"traceutil/trace.go:171","msg":"trace[1142240257] transaction","detail":"{read_only:false; response_revision:117; number_of_response:1; }","duration":"141.084345ms","start":"2025-11-23T09:57:02.063163Z","end":"2025-11-23T09:57:02.204247Z","steps":["trace[1142240257] 'compare'  (duration: 132.298203ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:02.49574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.58211ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837419424557 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-edit\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-edit\" value_size:1957 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T09:57:02.495841Z","caller":"traceutil/trace.go:171","msg":"trace[1763507131] transaction","detail":"{read_only:false; response_revision:119; number_of_response:1; }","duration":"249.990542ms","start":"2025-11-23T09:57:02.245837Z","end":"2025-11-23T09:57:02.495828Z","steps":["trace[1763507131] 'process raft request'  (duration: 121.258106ms)","trace[1763507131] 'compare'  (duration: 128.446744ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:02.811736Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.743867ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837419424559 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-view\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-view\" value_size:1862 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T09:57:02.811827Z","caller":"traceutil/trace.go:171","msg":"trace[334752838] linearizableReadLoop","detail":"{readStateIndex:125; appliedIndex:124; }","duration":"197.624876ms","start":"2025-11-23T09:57:02.614187Z","end":"2025-11-23T09:57:02.811812Z","steps":["trace[334752838] 'read index received'  (duration: 54.776357ms)","trace[334752838] 'applied index is now lower than readState.Index'  (duration: 142.846972ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:57:02.811874Z","caller":"traceutil/trace.go:171","msg":"trace[577911190] transaction","detail":"{read_only:false; response_revision:120; number_of_response:1; }","duration":"309.546043ms","start":"2025-11-23T09:57:02.502295Z","end":"2025-11-23T09:57:02.811841Z","steps":["trace[577911190] 'process raft request'  (duration: 166.630437ms)","trace[577911190] 'compare'  (duration: 142.557878ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:02.811926Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.752655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T09:57:02.811961Z","caller":"traceutil/trace.go:171","msg":"trace[450821894] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:120; }","duration":"197.79258ms","start":"2025-11-23T09:57:02.614154Z","end":"2025-11-23T09:57:02.811947Z","steps":["trace[450821894] 'agreement among raft nodes before linearized reading'  (duration: 197.694344ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:02.812003Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-23T09:57:02.50227Z","time spent":"309.683301ms","remote":"127.0.0.1:39468","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1917,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-view\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-view\" value_size:1862 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T09:57:03.126521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.304764ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837419424563 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:heapster\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:heapster\" value_size:579 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T09:57:03.126599Z","caller":"traceutil/trace.go:171","msg":"trace[1403684060] transaction","detail":"{read_only:false; response_revision:121; number_of_response:1; }","duration":"309.884743ms","start":"2025-11-23T09:57:02.816704Z","end":"2025-11-23T09:57:03.126589Z","steps":["trace[1403684060] 'process raft request'  (duration: 124.45761ms)","trace[1403684060] 'compare'  (duration: 185.120538ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:03.126635Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-23T09:57:02.816683Z","time spent":"309.941015ms","remote":"127.0.0.1:39468","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":625,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:heapster\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:heapster\" value_size:579 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T09:57:03.378154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.573425ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356837419424567 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:node-problem-detector\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:node-problem-detector\" value_size:583 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T09:57:03.37825Z","caller":"traceutil/trace.go:171","msg":"trace[407529311] transaction","detail":"{read_only:false; response_revision:123; number_of_response:1; }","duration":"236.959494ms","start":"2025-11-23T09:57:03.141275Z","end":"2025-11-23T09:57:03.378235Z","steps":["trace[407529311] 'process raft request'  (duration: 119.236514ms)","trace[407529311] 'compare'  (duration: 117.440472ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:57:03.488901Z","caller":"traceutil/trace.go:171","msg":"trace[331049729] transaction","detail":"{read_only:false; response_revision:124; number_of_response:1; }","duration":"105.829119ms","start":"2025-11-23T09:57:03.38305Z","end":"2025-11-23T09:57:03.488879Z","steps":["trace[331049729] 'process raft request'  (duration: 105.359949ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:57:03.685992Z","caller":"traceutil/trace.go:171","msg":"trace[1238052414] transaction","detail":"{read_only:false; response_revision:127; number_of_response:1; }","duration":"180.587913ms","start":"2025-11-23T09:57:03.505382Z","end":"2025-11-23T09:57:03.68597Z","steps":["trace[1238052414] 'process raft request'  (duration: 128.699733ms)","trace[1238052414] 'compare'  (duration: 51.773911ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:57:44.684831Z","caller":"traceutil/trace.go:171","msg":"trace[671402052] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"110.153636ms","start":"2025-11-23T09:57:44.574655Z","end":"2025-11-23T09:57:44.684809Z","steps":["trace[671402052] 'process raft request'  (duration: 110.003906ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:57:51 up 40 min,  0 user,  load average: 5.55, 4.20, 2.64
	Linux old-k8s-version-709593 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a1af83bb6749287f8df2adaeff4c43c5820f5194cb24f7fe3eb5ef134893d93c] <==
	I1123 09:57:23.601786       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:57:23.602109       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:57:23.602284       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:57:23.602304       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:57:23.602318       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:57:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:57:23.855098       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:57:23.855140       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:57:23.855154       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:57:23.900801       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:57:24.355697       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:57:24.355735       1 metrics.go:72] Registering metrics
	I1123 09:57:24.355844       1 controller.go:711] "Syncing nftables rules"
	I1123 09:57:33.855972       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:57:33.856030       1 main.go:301] handling current node
	I1123 09:57:43.856054       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:57:43.856111       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9ab267968c030e0a3bce6b123e59cf0e26705c3742842d1fe84461463f48a663] <==
	I1123 09:57:00.606586       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 09:57:00.606625       1 aggregator.go:166] initial CRD sync complete...
	I1123 09:57:00.606634       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 09:57:00.606641       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:57:00.606650       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:57:00.608306       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 09:57:00.609050       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 09:57:00.624076       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:57:00.649174       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 09:57:01.610779       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:57:01.702685       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:57:01.702703       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:57:04.338662       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:57:04.416324       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:57:04.524354       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:57:04.538023       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 09:57:04.540122       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 09:57:04.546988       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:57:04.575545       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 09:57:05.959109       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 09:57:05.975157       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:57:05.986661       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 09:57:17.926455       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 09:57:18.460236       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1123 09:57:47.744877       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.76.2:47470->192.168.76.2:10250: write: connection reset by peer
	
	
	==> kube-controller-manager [1b2964c41626762d3beb765fa131cc83c8eafa60068157afab3d1e775a761750] <==
	I1123 09:57:18.051120       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 09:57:18.052924       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-709593" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1123 09:57:18.132109       1 shared_informer.go:318] Caches are synced for attach detach
	I1123 09:57:18.349828       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-tndwj"
	I1123 09:57:18.372449       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gf5sx"
	I1123 09:57:18.406026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="452.070013ms"
	I1123 09:57:18.463224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.127396ms"
	I1123 09:57:18.483794       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sgv48"
	I1123 09:57:18.483871       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 09:57:18.504473       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tpvt2"
	I1123 09:57:18.560131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.571025ms"
	I1123 09:57:18.560538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="230.617µs"
	I1123 09:57:18.562358       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 09:57:18.562385       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 09:57:19.789485       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 09:57:19.808843       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-tndwj"
	I1123 09:57:19.823673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.107806ms"
	I1123 09:57:19.833064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.315043ms"
	I1123 09:57:19.833185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.73µs"
	I1123 09:57:33.949212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.096µs"
	I1123 09:57:33.981566       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.706µs"
	I1123 09:57:35.176726       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="136.892µs"
	I1123 09:57:35.214616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.894482ms"
	I1123 09:57:35.214767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.972µs"
	I1123 09:57:38.010283       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [e82a6fec044de994c043f2f9c5656e0c2a71e8e480ed8f7cca948de66ed51059] <==
	I1123 09:57:20.277594       1 server_others.go:69] "Using iptables proxy"
	I1123 09:57:20.292272       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 09:57:20.339595       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:57:20.344426       1 server_others.go:152] "Using iptables Proxier"
	I1123 09:57:20.344681       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 09:57:20.344815       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 09:57:20.344909       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 09:57:20.345726       1 server.go:846] "Version info" version="v1.28.0"
	I1123 09:57:20.345900       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:57:20.347106       1 config.go:188] "Starting service config controller"
	I1123 09:57:20.350153       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 09:57:20.349625       1 config.go:97] "Starting endpoint slice config controller"
	I1123 09:57:20.350452       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 09:57:20.350106       1 config.go:315] "Starting node config controller"
	I1123 09:57:20.350583       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 09:57:20.450547       1 shared_informer.go:318] Caches are synced for service config
	I1123 09:57:20.450714       1 shared_informer.go:318] Caches are synced for node config
	I1123 09:57:20.450744       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [33f6ed017ec882589a089aad6a009c657f1fc80298864259b48138233e264c91] <==
	W1123 09:57:01.700971       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 09:57:01.701017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 09:57:01.704770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 09:57:01.704814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 09:57:01.752559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 09:57:01.752596       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1123 09:57:01.981985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 09:57:01.982024       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 09:57:01.983872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 09:57:01.983905       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 09:57:02.057453       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 09:57:02.057498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 09:57:02.144948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 09:57:02.145025       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1123 09:57:03.483078       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 09:57:03.483126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 09:57:03.561961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 09:57:03.562012       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 09:57:03.808694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 09:57:03.808744       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 09:57:03.860531       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 09:57:03.860576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 09:57:03.972432       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 09:57:03.972478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1123 09:57:04.567087       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: W1123 09:57:18.547160    1519 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-709593" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-709593' and this object
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: E1123 09:57:18.547223    1519 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-709593" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-709593' and this object
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709145    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz9pq\" (UniqueName: \"kubernetes.io/projected/f5d963bd-a2f2-44d2-969c-d219c55aba33-kube-api-access-dz9pq\") pod \"kube-proxy-sgv48\" (UID: \"f5d963bd-a2f2-44d2-969c-d219c55aba33\") " pod="kube-system/kube-proxy-sgv48"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709218    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fd3daece-c28b-4efa-ae53-16c16790e5be-cni-cfg\") pod \"kindnet-tpvt2\" (UID: \"fd3daece-c28b-4efa-ae53-16c16790e5be\") " pod="kube-system/kindnet-tpvt2"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709250    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd3daece-c28b-4efa-ae53-16c16790e5be-xtables-lock\") pod \"kindnet-tpvt2\" (UID: \"fd3daece-c28b-4efa-ae53-16c16790e5be\") " pod="kube-system/kindnet-tpvt2"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709281    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6p4v\" (UniqueName: \"kubernetes.io/projected/fd3daece-c28b-4efa-ae53-16c16790e5be-kube-api-access-c6p4v\") pod \"kindnet-tpvt2\" (UID: \"fd3daece-c28b-4efa-ae53-16c16790e5be\") " pod="kube-system/kindnet-tpvt2"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709316    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5d963bd-a2f2-44d2-969c-d219c55aba33-lib-modules\") pod \"kube-proxy-sgv48\" (UID: \"f5d963bd-a2f2-44d2-969c-d219c55aba33\") " pod="kube-system/kube-proxy-sgv48"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709389    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd3daece-c28b-4efa-ae53-16c16790e5be-lib-modules\") pod \"kindnet-tpvt2\" (UID: \"fd3daece-c28b-4efa-ae53-16c16790e5be\") " pod="kube-system/kindnet-tpvt2"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709422    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f5d963bd-a2f2-44d2-969c-d219c55aba33-kube-proxy\") pod \"kube-proxy-sgv48\" (UID: \"f5d963bd-a2f2-44d2-969c-d219c55aba33\") " pod="kube-system/kube-proxy-sgv48"
	Nov 23 09:57:18 old-k8s-version-709593 kubelet[1519]: I1123 09:57:18.709454    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5d963bd-a2f2-44d2-969c-d219c55aba33-xtables-lock\") pod \"kube-proxy-sgv48\" (UID: \"f5d963bd-a2f2-44d2-969c-d219c55aba33\") " pod="kube-system/kube-proxy-sgv48"
	Nov 23 09:57:24 old-k8s-version-709593 kubelet[1519]: I1123 09:57:24.152873    1519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sgv48" podStartSLOduration=6.152803535 podCreationTimestamp="2025-11-23 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:21.2206232 +0000 UTC m=+15.292351138" watchObservedRunningTime="2025-11-23 09:57:24.152803535 +0000 UTC m=+18.224531435"
	Nov 23 09:57:24 old-k8s-version-709593 kubelet[1519]: I1123 09:57:24.153064    1519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-tpvt2" podStartSLOduration=2.534840269 podCreationTimestamp="2025-11-23 09:57:18 +0000 UTC" firstStartedPulling="2025-11-23 09:57:19.547788823 +0000 UTC m=+13.619516716" lastFinishedPulling="2025-11-23 09:57:23.165974087 +0000 UTC m=+17.237701980" observedRunningTime="2025-11-23 09:57:24.152485675 +0000 UTC m=+18.224213576" watchObservedRunningTime="2025-11-23 09:57:24.153025533 +0000 UTC m=+18.224753438"
	Nov 23 09:57:33 old-k8s-version-709593 kubelet[1519]: I1123 09:57:33.920548    1519 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 09:57:33 old-k8s-version-709593 kubelet[1519]: I1123 09:57:33.948876    1519 topology_manager.go:215] "Topology Admit Handler" podUID="9a493920-3739-4eb9-8426-3590a8f2ee51" podNamespace="kube-system" podName="coredns-5dd5756b68-gf5sx"
	Nov 23 09:57:33 old-k8s-version-709593 kubelet[1519]: I1123 09:57:33.949059    1519 topology_manager.go:215] "Topology Admit Handler" podUID="ba58926e-fdf3-4750-b44d-7c94a027737e" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 09:57:34 old-k8s-version-709593 kubelet[1519]: I1123 09:57:34.123178    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724lb\" (UniqueName: \"kubernetes.io/projected/ba58926e-fdf3-4750-b44d-7c94a027737e-kube-api-access-724lb\") pod \"storage-provisioner\" (UID: \"ba58926e-fdf3-4750-b44d-7c94a027737e\") " pod="kube-system/storage-provisioner"
	Nov 23 09:57:34 old-k8s-version-709593 kubelet[1519]: I1123 09:57:34.123243    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ba58926e-fdf3-4750-b44d-7c94a027737e-tmp\") pod \"storage-provisioner\" (UID: \"ba58926e-fdf3-4750-b44d-7c94a027737e\") " pod="kube-system/storage-provisioner"
	Nov 23 09:57:34 old-k8s-version-709593 kubelet[1519]: I1123 09:57:34.123297    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rzx7\" (UniqueName: \"kubernetes.io/projected/9a493920-3739-4eb9-8426-3590a8f2ee51-kube-api-access-5rzx7\") pod \"coredns-5dd5756b68-gf5sx\" (UID: \"9a493920-3739-4eb9-8426-3590a8f2ee51\") " pod="kube-system/coredns-5dd5756b68-gf5sx"
	Nov 23 09:57:34 old-k8s-version-709593 kubelet[1519]: I1123 09:57:34.123357    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a493920-3739-4eb9-8426-3590a8f2ee51-config-volume\") pod \"coredns-5dd5756b68-gf5sx\" (UID: \"9a493920-3739-4eb9-8426-3590a8f2ee51\") " pod="kube-system/coredns-5dd5756b68-gf5sx"
	Nov 23 09:57:35 old-k8s-version-709593 kubelet[1519]: I1123 09:57:35.176230    1519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gf5sx" podStartSLOduration=17.176168603 podCreationTimestamp="2025-11-23 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:35.175754843 +0000 UTC m=+29.247482743" watchObservedRunningTime="2025-11-23 09:57:35.176168603 +0000 UTC m=+29.247896503"
	Nov 23 09:57:35 old-k8s-version-709593 kubelet[1519]: I1123 09:57:35.204836    1519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.204788689 podCreationTimestamp="2025-11-23 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:35.19026469 +0000 UTC m=+29.261992589" watchObservedRunningTime="2025-11-23 09:57:35.204788689 +0000 UTC m=+29.276516592"
	Nov 23 09:57:37 old-k8s-version-709593 kubelet[1519]: I1123 09:57:37.507262    1519 topology_manager.go:215] "Topology Admit Handler" podUID="bea346d9-0dca-482c-b9f9-7b71741b18d7" podNamespace="default" podName="busybox"
	Nov 23 09:57:37 old-k8s-version-709593 kubelet[1519]: I1123 09:57:37.646410    1519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj5kg\" (UniqueName: \"kubernetes.io/projected/bea346d9-0dca-482c-b9f9-7b71741b18d7-kube-api-access-pj5kg\") pod \"busybox\" (UID: \"bea346d9-0dca-482c-b9f9-7b71741b18d7\") " pod="default/busybox"
	Nov 23 09:57:41 old-k8s-version-709593 kubelet[1519]: I1123 09:57:41.192410    1519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.9155870259999999 podCreationTimestamp="2025-11-23 09:57:37 +0000 UTC" firstStartedPulling="2025-11-23 09:57:37.952685082 +0000 UTC m=+32.024412966" lastFinishedPulling="2025-11-23 09:57:40.229447793 +0000 UTC m=+34.301175679" observedRunningTime="2025-11-23 09:57:41.192028507 +0000 UTC m=+35.263756408" watchObservedRunningTime="2025-11-23 09:57:41.192349739 +0000 UTC m=+35.264077634"
	Nov 23 09:57:47 old-k8s-version-709593 kubelet[1519]: E1123 09:57:47.744109    1519 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 192.168.76.2:34062->192.168.76.2:10010: write tcp 192.168.76.2:34062->192.168.76.2:10010: write: broken pipe
	
	
	==> storage-provisioner [6188a0a11a558ccfe4a936446819a158ec0f3ff08b1c7692bf3db57ce82539bc] <==
	I1123 09:57:34.497639       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:57:34.510426       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:57:34.510517       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 09:57:34.519430       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:57:34.519625       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-709593_09fc0e4b-1f89-47c2-90c6-e8921583fe8f!
	I1123 09:57:34.522696       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89d02a34-1ced-4051-82ca-0198f46f6d6a", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-709593_09fc0e4b-1f89-47c2-90c6-e8921583fe8f became leader
	I1123 09:57:34.619835       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-709593_09fc0e4b-1f89-47c2-90c6-e8921583fe8f!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-709593 -n old-k8s-version-709593
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-709593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (14.80s)

TestStartStop/group/embed-certs/serial/DeployApp (16.08s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-412583 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [37a908eb-6709-4200-8522-c8fe9a550046] Pending
helpers_test.go:352: "busybox" [37a908eb-6709-4200-8522-c8fe9a550046] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [37a908eb-6709-4200-8522-c8fe9a550046] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003404002s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-412583 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
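Note on the failing assertion above: the test creates the busybox pod from testdata/busybox.yaml, waits for it to report Ready, then execs 'ulimit -n' inside it and requires the soft open-file limit to be 1048576; this run got 1024. A minimal manual reproduction against this profile is sketched below (the hard-limit and /proc probes are illustrative extras, not commands the test itself runs):

	kubectl --context embed-certs-412583 exec busybox -- /bin/sh -c "ulimit -n"     # soft nofile limit; the test expects 1048576
	kubectl --context embed-certs-412583 exec busybox -- /bin/sh -c "ulimit -Hn"    # hard nofile limit, for comparison
	kubectl --context embed-certs-412583 exec busybox -- cat /proc/1/limits         # kernel view of the container's limits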
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-412583
helpers_test.go:243: (dbg) docker inspect embed-certs-412583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277",
	        "Created": "2025-11-23T09:57:03.852986793Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:57:03.913206148Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277/hosts",
	        "LogPath": "/var/lib/docker/containers/7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277/7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277-json.log",
	        "Name": "/embed-certs-412583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-412583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-412583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277",
	                "LowerDir": "/var/lib/docker/overlay2/d3050ed3acfa540bcb83ba19967396acb2acfd1e83630f56cb159c37cebe8813-init/diff:/var/lib/docker/overlay2/c80a0dfdb81b7753b0a82e2bc6458805cbbad0a9ce5819c63e1d9b7b71ba226c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3050ed3acfa540bcb83ba19967396acb2acfd1e83630f56cb159c37cebe8813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3050ed3acfa540bcb83ba19967396acb2acfd1e83630f56cb159c37cebe8813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3050ed3acfa540bcb83ba19967396acb2acfd1e83630f56cb159c37cebe8813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-412583",
	                "Source": "/var/lib/docker/volumes/embed-certs-412583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-412583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-412583",
	                "name.minikube.sigs.k8s.io": "embed-certs-412583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c087577399c2df976fd2fa55e091b19ec6dcc6597777ebf6518d0fa151289ca2",
	            "SandboxKey": "/var/run/docker/netns/c087577399c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-412583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8ee659370d2c34a46a25b0fbc93ad5ac08fb612d1cf2c36db6da4f7931d8317d",
	                    "EndpointID": "d82b70ac28ef7ddb287ff63171846450ebb944a2de1446e3f8e6cc90441445a7",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "5a:da:8c:69:c5:18",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-412583",
	                        "7a22543402f8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412583 -n embed-certs-412583
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-412583 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-412583 logs -n 25: (1.209953791s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-676928 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /var/lib/kubelet/config.yaml                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status docker --all --full --no-pager                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat docker --no-pager                                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/docker/daemon.json                                                                                                                              │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo docker system info                                                                                                                                       │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl status cri-docker --all --full --no-pager                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat cri-docker --no-pager                                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                 │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                           │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cri-dockerd --version                                                                                                                                    │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status containerd --all --full --no-pager                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl cat containerd --no-pager                                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /lib/systemd/system/containerd.service                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/containerd/config.toml                                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo containerd config dump                                                                                                                                   │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status crio --all --full --no-pager                                                                                                            │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat crio --no-pager                                                                                                                            │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                  │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo crio config                                                                                                                                              │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p bridge-676928                                                                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p disable-driver-mounts-178820                                                                                                                                                │ disable-driver-mounts-178820 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ start   │ -p default-k8s-diff-port-696492 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ default-k8s-diff-port-696492 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-709593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                   │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ stop    │ -p old-k8s-version-709593 --alsologtostderr -v=3                                                                                                                               │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:57:41
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:57:41.194019  311138 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:57:41.194298  311138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:57:41.194308  311138 out.go:374] Setting ErrFile to fd 2...
	I1123 09:57:41.194312  311138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:57:41.194606  311138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:57:41.195144  311138 out.go:368] Setting JSON to false
	I1123 09:57:41.196591  311138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2400,"bootTime":1763889461,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:57:41.196668  311138 start.go:143] virtualization: kvm guest
	I1123 09:57:41.199167  311138 out.go:179] * [default-k8s-diff-port-696492] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:57:41.201043  311138 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:57:41.201094  311138 notify.go:221] Checking for updates...
	I1123 09:57:41.204382  311138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:57:41.206017  311138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:57:41.207959  311138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:57:41.209794  311138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:57:41.211809  311138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:57:41.214009  311138 config.go:182] Loaded profile config "embed-certs-412583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:41.214105  311138 config.go:182] Loaded profile config "no-preload-309734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:41.214180  311138 config.go:182] Loaded profile config "old-k8s-version-709593": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 09:57:41.214271  311138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:57:41.241306  311138 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:57:41.241474  311138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:57:41.312013  311138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:57:41.299959199 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:57:41.312116  311138 docker.go:319] overlay module found
	I1123 09:57:41.314243  311138 out.go:179] * Using the docker driver based on user configuration
	I1123 09:57:41.316002  311138 start.go:309] selected driver: docker
	I1123 09:57:41.316024  311138 start.go:927] validating driver "docker" against <nil>
	I1123 09:57:41.316037  311138 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:57:41.316751  311138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:57:41.385595  311138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:57:41.373759534 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:57:41.385794  311138 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:57:41.386023  311138 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:41.388087  311138 out.go:179] * Using Docker driver with root privileges
	I1123 09:57:41.389651  311138 cni.go:84] Creating CNI manager for ""
	I1123 09:57:41.389725  311138 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:57:41.389738  311138 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:57:41.389816  311138 start.go:353] cluster config:
	{Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:57:41.391556  311138 out.go:179] * Starting "default-k8s-diff-port-696492" primary control-plane node in "default-k8s-diff-port-696492" cluster
	I1123 09:57:41.392982  311138 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 09:57:41.394476  311138 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:57:41.395978  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:41.396028  311138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 09:57:41.396036  311138 cache.go:65] Caching tarball of preloaded images
	I1123 09:57:41.396075  311138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:57:41.396157  311138 preload.go:238] Found /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 09:57:41.396175  311138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 09:57:41.396320  311138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json ...
	I1123 09:57:41.396374  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json: {Name:mk3b81d8fd8561a54828649e3e510565221995b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:41.422089  311138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:57:41.422112  311138 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:57:41.422133  311138 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:57:41.422177  311138 start.go:360] acquireMachinesLock for default-k8s-diff-port-696492: {Name:mkc8ee83ed2b7a995e355ddec223dfeea233bbf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:57:41.422316  311138 start.go:364] duration metric: took 112.296µs to acquireMachinesLock for "default-k8s-diff-port-696492"
	I1123 09:57:41.422500  311138 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disabl
eCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:57:41.422632  311138 start.go:125] createHost starting for "" (driver="docker")
	W1123 09:57:37.251564  300017 node_ready.go:57] node "embed-certs-412583" has "Ready":"False" status (will retry)
	W1123 09:57:39.751746  300017 node_ready.go:57] node "embed-certs-412583" has "Ready":"False" status (will retry)
	I1123 09:57:42.255256  300017 node_ready.go:49] node "embed-certs-412583" is "Ready"
	I1123 09:57:42.255291  300017 node_ready.go:38] duration metric: took 11.507766088s for node "embed-certs-412583" to be "Ready" ...
	I1123 09:57:42.255310  300017 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:57:42.255471  300017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:57:42.277737  300017 api_server.go:72] duration metric: took 12.028046262s to wait for apiserver process to appear ...
	I1123 09:57:42.277770  300017 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:57:42.277792  300017 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:57:42.285468  300017 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 09:57:42.287274  300017 api_server.go:141] control plane version: v1.34.1
	I1123 09:57:42.287395  300017 api_server.go:131] duration metric: took 9.61454ms to wait for apiserver health ...
	I1123 09:57:42.287422  300017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:57:42.294433  300017 system_pods.go:59] 8 kube-system pods found
	I1123 09:57:42.294478  300017 system_pods.go:61] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.294486  300017 system_pods.go:61] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.294493  300017 system_pods.go:61] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.294499  300017 system_pods.go:61] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.294505  300017 system_pods.go:61] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.294510  300017 system_pods.go:61] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.294515  300017 system_pods.go:61] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.294526  300017 system_pods.go:61] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.294539  300017 system_pods.go:74] duration metric: took 7.098728ms to wait for pod list to return data ...
	I1123 09:57:42.294549  300017 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:57:42.298321  300017 default_sa.go:45] found service account: "default"
	I1123 09:57:42.298368  300017 default_sa.go:55] duration metric: took 3.811774ms for default service account to be created ...
	I1123 09:57:42.298382  300017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:57:42.302807  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.302871  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.302887  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.302896  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.302903  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.302927  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.302937  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.302943  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.302954  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.303049  300017 retry.go:31] will retry after 268.599682ms: missing components: kube-dns
	I1123 09:57:42.577490  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.577531  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.577541  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.577550  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.577557  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.577563  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.577568  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.577573  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.577581  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.577600  300017 retry.go:31] will retry after 240.156475ms: missing components: kube-dns
	I1123 09:57:42.822131  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.822171  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.822177  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.822182  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.822186  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.822190  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.822194  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.822197  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.822202  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.822216  300017 retry.go:31] will retry after 383.926777ms: missing components: kube-dns
	I1123 09:57:43.211532  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:43.211575  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Running
	I1123 09:57:43.211585  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:43.211592  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:43.211600  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:43.211608  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:43.211624  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:43.211635  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:43.211640  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Running
	I1123 09:57:43.211650  300017 system_pods.go:126] duration metric: took 913.260942ms to wait for k8s-apps to be running ...
	I1123 09:57:43.211661  300017 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:57:43.211722  300017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:57:43.226055  300017 system_svc.go:56] duration metric: took 14.383207ms WaitForService to wait for kubelet
	I1123 09:57:43.226087  300017 kubeadm.go:587] duration metric: took 12.976401428s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:43.226108  300017 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:57:43.229492  300017 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:57:43.229524  300017 node_conditions.go:123] node cpu capacity is 8
	I1123 09:57:43.229547  300017 node_conditions.go:105] duration metric: took 3.432669ms to run NodePressure ...
	I1123 09:57:43.229560  300017 start.go:242] waiting for startup goroutines ...
	I1123 09:57:43.229570  300017 start.go:247] waiting for cluster config update ...
	I1123 09:57:43.229583  300017 start.go:256] writing updated cluster config ...
	I1123 09:57:43.229975  300017 ssh_runner.go:195] Run: rm -f paused
	I1123 09:57:43.235596  300017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:43.243251  300017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.248984  300017 pod_ready.go:94] pod "coredns-66bc5c9577-8dgc7" is "Ready"
	I1123 09:57:43.249015  300017 pod_ready.go:86] duration metric: took 5.729453ms for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.251635  300017 pod_ready.go:83] waiting for pod "etcd-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.256613  300017 pod_ready.go:94] pod "etcd-embed-certs-412583" is "Ready"
	I1123 09:57:43.256645  300017 pod_ready.go:86] duration metric: took 4.984583ms for pod "etcd-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.259023  300017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.264242  300017 pod_ready.go:94] pod "kube-apiserver-embed-certs-412583" is "Ready"
	I1123 09:57:43.264273  300017 pod_ready.go:86] duration metric: took 5.223434ms for pod "kube-apiserver-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.311182  300017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.642602  300017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-412583" is "Ready"
	I1123 09:57:43.642637  300017 pod_ready.go:86] duration metric: took 331.426321ms for pod "kube-controller-manager-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.843849  300017 pod_ready.go:83] waiting for pod "kube-proxy-wm7k2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.244623  300017 pod_ready.go:94] pod "kube-proxy-wm7k2" is "Ready"
	I1123 09:57:44.244667  300017 pod_ready.go:86] duration metric: took 400.77745ms for pod "kube-proxy-wm7k2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.444056  300017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.843963  300017 pod_ready.go:94] pod "kube-scheduler-embed-certs-412583" is "Ready"
	I1123 09:57:44.843992  300017 pod_ready.go:86] duration metric: took 399.904179ms for pod "kube-scheduler-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.844006  300017 pod_ready.go:40] duration metric: took 1.608365258s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:44.891853  300017 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:57:44.964864  300017 out.go:179] * Done! kubectl is now configured to use "embed-certs-412583" cluster and "default" namespace by default
	W1123 09:57:41.488122  296642 node_ready.go:57] node "no-preload-309734" has "Ready":"False" status (will retry)
	W1123 09:57:43.488201  296642 node_ready.go:57] node "no-preload-309734" has "Ready":"False" status (will retry)
	I1123 09:57:43.988019  296642 node_ready.go:49] node "no-preload-309734" is "Ready"
	I1123 09:57:43.988052  296642 node_ready.go:38] duration metric: took 14.003534589s for node "no-preload-309734" to be "Ready" ...
	I1123 09:57:43.988069  296642 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:57:43.988149  296642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:57:44.008503  296642 api_server.go:72] duration metric: took 14.434117996s to wait for apiserver process to appear ...
	I1123 09:57:44.008530  296642 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:57:44.008551  296642 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:57:44.017109  296642 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 09:57:44.018176  296642 api_server.go:141] control plane version: v1.34.1
	I1123 09:57:44.018200  296642 api_server.go:131] duration metric: took 9.663468ms to wait for apiserver health ...
	I1123 09:57:44.018208  296642 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:57:44.022287  296642 system_pods.go:59] 8 kube-system pods found
	I1123 09:57:44.022324  296642 system_pods.go:61] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.022351  296642 system_pods.go:61] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.022364  296642 system_pods.go:61] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.022369  296642 system_pods.go:61] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.022375  296642 system_pods.go:61] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.022381  296642 system_pods.go:61] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.022387  296642 system_pods.go:61] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.022397  296642 system_pods.go:61] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.022406  296642 system_pods.go:74] duration metric: took 4.191598ms to wait for pod list to return data ...
	I1123 09:57:44.022421  296642 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:57:44.025262  296642 default_sa.go:45] found service account: "default"
	I1123 09:57:44.025287  296642 default_sa.go:55] duration metric: took 2.858313ms for default service account to be created ...
	I1123 09:57:44.025300  296642 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:57:44.028240  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.028269  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.028275  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.028281  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.028285  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.028289  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.028293  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.028296  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.028300  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.028346  296642 retry.go:31] will retry after 283.472429ms: missing components: kube-dns
	I1123 09:57:44.317300  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.317353  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.317361  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.317370  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.317376  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.317382  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.317387  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.317391  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.317397  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.317416  296642 retry.go:31] will retry after 321.7427ms: missing components: kube-dns
	I1123 09:57:44.689277  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.689322  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.689344  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.689353  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.689359  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.689366  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.689370  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.689375  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.689382  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.689411  296642 retry.go:31] will retry after 353.961831ms: missing components: kube-dns
	I1123 09:57:45.048995  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:45.049060  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:45.049069  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:45.049078  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:45.049084  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:45.049090  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:45.049099  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:45.049104  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:45.049116  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:45.049135  296642 retry.go:31] will retry after 412.630882ms: missing components: kube-dns
	I1123 09:57:45.607770  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:45.607816  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:45.607826  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:45.607836  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:45.607841  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:45.607847  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:45.607851  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:45.607856  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:45.607873  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:45.607891  296642 retry.go:31] will retry after 544.365573ms: missing components: kube-dns
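Each retry.go line above re-checks the kube-system pod list after a slightly longer pause (283ms, 321ms, 353ms, 412ms, 544ms) until the missing kube-dns pod turns up. A generic sketch of that grow-and-jitter wait loop (names and constants are illustrative, not minikube's retry package):

package sketch

import (
	"errors"
	"math/rand"
	"time"
)

// waitFor keeps calling check until it returns true, sleeping a little
// longer (with jitter) between attempts, and gives up after the deadline.
func waitFor(check func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		// Grow the delay and add up to 25% jitter, mirroring the
		// increasing gaps between the retries in the log above.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/4))))
		delay = delay * 5 / 4
	}
	return errors.New("timed out waiting for condition")
}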
	I1123 09:57:41.425584  311138 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:57:41.425893  311138 start.go:159] libmachine.API.Create for "default-k8s-diff-port-696492" (driver="docker")
	I1123 09:57:41.425945  311138 client.go:173] LocalClient.Create starting
	I1123 09:57:41.426056  311138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem
	I1123 09:57:41.426100  311138 main.go:143] libmachine: Decoding PEM data...
	I1123 09:57:41.426121  311138 main.go:143] libmachine: Parsing certificate...
	I1123 09:57:41.426185  311138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem
	I1123 09:57:41.426208  311138 main.go:143] libmachine: Decoding PEM data...
	I1123 09:57:41.426217  311138 main.go:143] libmachine: Parsing certificate...
	I1123 09:57:41.426608  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:57:41.445568  311138 cli_runner.go:211] docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:57:41.445670  311138 network_create.go:284] running [docker network inspect default-k8s-diff-port-696492] to gather additional debugging logs...
	I1123 09:57:41.445697  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492
	W1123 09:57:41.465174  311138 cli_runner.go:211] docker network inspect default-k8s-diff-port-696492 returned with exit code 1
	I1123 09:57:41.465216  311138 network_create.go:287] error running [docker network inspect default-k8s-diff-port-696492]: docker network inspect default-k8s-diff-port-696492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-696492 not found
	I1123 09:57:41.465236  311138 network_create.go:289] output of [docker network inspect default-k8s-diff-port-696492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-696492 not found
	
	** /stderr **
	I1123 09:57:41.465403  311138 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:57:41.487255  311138 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-de5cba392bb4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:8d:f5:88:bc:8b} reservation:<nil>}
	I1123 09:57:41.488105  311138 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e2eabbe85d5b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:f4:02:bd:23:31} reservation:<nil>}
	I1123 09:57:41.489037  311138 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-22e47e96d08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:9e:83:f9:9f:f6} reservation:<nil>}
	I1123 09:57:41.489614  311138 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4fa988beb7cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:18:12:be:77:f6} reservation:<nil>}
	I1123 09:57:41.492079  311138 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d80820}
	I1123 09:57:41.492121  311138 network_create.go:124] attempt to create docker network default-k8s-diff-port-696492 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 09:57:41.492171  311138 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 default-k8s-diff-port-696492
	I1123 09:57:41.554538  311138 network_create.go:108] docker network default-k8s-diff-port-696492 192.168.85.0/24 created
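The subnet probing above walks candidate private /24 networks in steps of 9 (192.168.49.0/24, 58, 67, 76, ...) and creates the new docker network on the first one no existing bridge occupies, 192.168.85.0/24 in this run. A simplified sketch of that selection (the helper name and the upper bound are assumptions):

package sketch

import "fmt"

// freeSubnet returns the first 192.168.x.0/24 candidate that is not in
// the set of subnets already used by existing docker networks.
func freeSubnet(taken map[string]bool) (string, error) {
	// minikube-style candidates: 192.168.49.0/24, 58, 67, 76, 85, ...
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free /24 found")
}

// With 49, 58, 67 and 76 taken, as in the log, this yields "192.168.85.0/24".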
	I1123 09:57:41.554588  311138 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-696492" container
	I1123 09:57:41.554664  311138 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:57:41.575522  311138 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-696492 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:57:41.598058  311138 oci.go:103] Successfully created a docker volume default-k8s-diff-port-696492
	I1123 09:57:41.598141  311138 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-696492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --entrypoint /usr/bin/test -v default-k8s-diff-port-696492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:57:42.041176  311138 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-696492
	I1123 09:57:42.041254  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:42.041269  311138 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:57:42.041325  311138 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-696492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 09:57:46.265821  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:46.265851  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Running
	I1123 09:57:46.265856  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:46.265860  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:46.265863  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:46.265868  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:46.265870  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:46.265875  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:46.265879  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Running
	I1123 09:57:46.265889  296642 system_pods.go:126] duration metric: took 2.240582653s to wait for k8s-apps to be running ...
	I1123 09:57:46.265903  296642 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:57:46.265972  296642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:57:46.283075  296642 system_svc.go:56] duration metric: took 17.161056ms WaitForService to wait for kubelet
	I1123 09:57:46.283105  296642 kubeadm.go:587] duration metric: took 16.70872571s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:46.283128  296642 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:57:46.491444  296642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:57:46.491473  296642 node_conditions.go:123] node cpu capacity is 8
	I1123 09:57:46.491486  296642 node_conditions.go:105] duration metric: took 208.353263ms to run NodePressure ...
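The NodePressure verification reads the capacity the kubelet reported for the node (8 CPUs and 304681132Ki of ephemeral storage in this run). With client-go that is a read of Node.Status.Capacity, roughly as in this sketch (client construction omitted, helper name illustrative):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node and prints the CPU and ephemeral
// storage capacity the kubelet reported for it.
func printNodeCapacity(ctx context.Context, client kubernetes.Interface) error {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity.Cpu()                  // e.g. "8"
		storage := n.Status.Capacity.StorageEphemeral() // e.g. "304681132Ki"
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}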
	I1123 09:57:46.491509  296642 start.go:242] waiting for startup goroutines ...
	I1123 09:57:46.491520  296642 start.go:247] waiting for cluster config update ...
	I1123 09:57:46.491533  296642 start.go:256] writing updated cluster config ...
	I1123 09:57:46.491804  296642 ssh_runner.go:195] Run: rm -f paused
	I1123 09:57:46.498152  296642 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:46.503240  296642 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.508998  296642 pod_ready.go:94] pod "coredns-66bc5c9577-sx25q" is "Ready"
	I1123 09:57:46.509028  296642 pod_ready.go:86] duration metric: took 5.757344ms for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.512072  296642 pod_ready.go:83] waiting for pod "etcd-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.517750  296642 pod_ready.go:94] pod "etcd-no-preload-309734" is "Ready"
	I1123 09:57:46.517777  296642 pod_ready.go:86] duration metric: took 5.673234ms for pod "etcd-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.520446  296642 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.525480  296642 pod_ready.go:94] pod "kube-apiserver-no-preload-309734" is "Ready"
	I1123 09:57:46.525513  296642 pod_ready.go:86] duration metric: took 5.036877ms for pod "kube-apiserver-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.528196  296642 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.902790  296642 pod_ready.go:94] pod "kube-controller-manager-no-preload-309734" is "Ready"
	I1123 09:57:46.902815  296642 pod_ready.go:86] duration metric: took 374.588413ms for pod "kube-controller-manager-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.104263  296642 pod_ready.go:83] waiting for pod "kube-proxy-jpvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.504876  296642 pod_ready.go:94] pod "kube-proxy-jpvhc" is "Ready"
	I1123 09:57:47.504999  296642 pod_ready.go:86] duration metric: took 400.696383ms for pod "kube-proxy-jpvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.706275  296642 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:48.104684  296642 pod_ready.go:94] pod "kube-scheduler-no-preload-309734" is "Ready"
	I1123 09:57:48.104720  296642 pod_ready.go:86] duration metric: took 398.41369ms for pod "kube-scheduler-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:48.104739  296642 pod_ready.go:40] duration metric: took 1.606531718s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:48.181507  296642 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:57:48.183959  296642 out.go:179] * Done! kubectl is now configured to use "no-preload-309734" cluster and "default" namespace by default
	I1123 09:57:46.740944  311138 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-696492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.699532205s)
	I1123 09:57:46.741010  311138 kic.go:203] duration metric: took 4.699734046s to extract preloaded images to volume ...
	W1123 09:57:46.741179  311138 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:57:46.741234  311138 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:57:46.741304  311138 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:57:46.807009  311138 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-696492 --name default-k8s-diff-port-696492 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --network default-k8s-diff-port-696492 --ip 192.168.85.2 --volume default-k8s-diff-port-696492:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:57:47.199589  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Running}}
	I1123 09:57:47.220655  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.242623  311138 cli_runner.go:164] Run: docker exec default-k8s-diff-port-696492 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:57:47.295743  311138 oci.go:144] the created container "default-k8s-diff-port-696492" has a running status.
	I1123 09:57:47.295783  311138 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa...
	I1123 09:57:47.562280  311138 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:57:47.611801  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.650055  311138 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:57:47.650078  311138 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-696492 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:57:47.733580  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.763876  311138 machine.go:94] provisionDockerMachine start ...
	I1123 09:57:47.763997  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:47.798484  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:47.798947  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:47.798969  311138 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:57:47.966787  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-696492
	
	I1123 09:57:47.966822  311138 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-696492"
	I1123 09:57:47.966888  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:47.993804  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:47.994099  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:47.994117  311138 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-696492 && echo "default-k8s-diff-port-696492" | sudo tee /etc/hostname
	I1123 09:57:48.174661  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-696492
	
	I1123 09:57:48.174752  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.203529  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:48.203843  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:48.203881  311138 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-696492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-696492/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-696492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:57:48.379959  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:57:48.380002  311138 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-3552/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-3552/.minikube}
	I1123 09:57:48.380096  311138 ubuntu.go:190] setting up certificates
	I1123 09:57:48.380127  311138 provision.go:84] configureAuth start
	I1123 09:57:48.380222  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:48.421922  311138 provision.go:143] copyHostCerts
	I1123 09:57:48.422045  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem, removing ...
	I1123 09:57:48.422074  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem
	I1123 09:57:48.422196  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem (1679 bytes)
	I1123 09:57:48.422353  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem, removing ...
	I1123 09:57:48.422365  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem
	I1123 09:57:48.422399  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem (1082 bytes)
	I1123 09:57:48.422467  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem, removing ...
	I1123 09:57:48.422523  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem
	I1123 09:57:48.422566  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem (1123 bytes)
	I1123 09:57:48.422642  311138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-696492 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-696492 localhost minikube]
	I1123 09:57:48.539621  311138 provision.go:177] copyRemoteCerts
	I1123 09:57:48.539708  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:57:48.539762  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.564284  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:48.677154  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 09:57:48.704807  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 09:57:48.730566  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:57:48.755362  311138 provision.go:87] duration metric: took 375.193527ms to configureAuth
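configureAuth above generates a docker server certificate whose SANs cover the loopback address, the container IP and the machine names. Creating a SAN-bearing certificate in Go uses the standard crypto/x509 flow, sketched below; for brevity the sketch self-signs, whereas minikube signs with its CA key, so treat it as an illustration rather than the actual provision.go logic:

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// writeServerCert creates a self-signed certificate valid for the given
// DNS names and IPs and writes cert.pem / key.pem to the current directory.
func writeServerCert(dnsNames []string, ips []net.IP) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"minikube-sketch"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the 26280h CertExpiration in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames, // e.g. ["default-k8s-diff-port-696492", "localhost", "minikube"]
		IPAddresses:  ips,      // e.g. [127.0.0.1, 192.168.85.2]
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return err
	}
	certOut := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyOut := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("cert.pem", certOut, 0o644); err != nil {
		return err
	}
	return os.WriteFile("key.pem", keyOut, 0o600)
}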
	I1123 09:57:48.755396  311138 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:57:48.755732  311138 config.go:182] Loaded profile config "default-k8s-diff-port-696492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:48.755752  311138 machine.go:97] duration metric: took 991.839554ms to provisionDockerMachine
	I1123 09:57:48.755762  311138 client.go:176] duration metric: took 7.329805852s to LocalClient.Create
	I1123 09:57:48.755786  311138 start.go:167] duration metric: took 7.329894759s to libmachine.API.Create "default-k8s-diff-port-696492"
	I1123 09:57:48.755799  311138 start.go:293] postStartSetup for "default-k8s-diff-port-696492" (driver="docker")
	I1123 09:57:48.755811  311138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:57:48.755868  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:57:48.755919  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.784317  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:48.901734  311138 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:57:48.906292  311138 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:57:48.906325  311138 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:57:48.906355  311138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/addons for local assets ...
	I1123 09:57:48.906577  311138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/files for local assets ...
	I1123 09:57:48.906715  311138 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem -> 71092.pem in /etc/ssl/certs
	I1123 09:57:48.906835  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:57:48.917431  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:57:48.947477  311138 start.go:296] duration metric: took 191.661634ms for postStartSetup
	I1123 09:57:48.947957  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:48.973141  311138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json ...
	I1123 09:57:48.973692  311138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:57:48.973751  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.996029  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.106682  311138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:57:49.112230  311138 start.go:128] duration metric: took 7.689569326s to createHost
	I1123 09:57:49.112259  311138 start.go:83] releasing machines lock for "default-k8s-diff-port-696492", held for 7.689795634s
	I1123 09:57:49.112351  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:49.135976  311138 ssh_runner.go:195] Run: cat /version.json
	I1123 09:57:49.136033  311138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:57:49.136042  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:49.136113  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:49.160077  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.161278  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.264125  311138 ssh_runner.go:195] Run: systemctl --version
	I1123 09:57:49.329282  311138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:57:49.335197  311138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:57:49.335268  311138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:57:49.366357  311138 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:57:49.366380  311138 start.go:496] detecting cgroup driver to use...
	I1123 09:57:49.366416  311138 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:57:49.366470  311138 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 09:57:49.383235  311138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 09:57:49.399768  311138 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:57:49.399842  311138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:57:49.420125  311138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:57:49.442300  311138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:57:49.541498  311138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:57:49.659194  311138 docker.go:234] disabling docker service ...
	I1123 09:57:49.659272  311138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:57:49.682070  311138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:57:49.698015  311138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:57:49.798105  311138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:57:49.894575  311138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:57:49.911733  311138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:57:49.931314  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 09:57:49.945424  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 09:57:49.956889  311138 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 09:57:49.956953  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 09:57:49.967923  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:57:49.979575  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 09:57:49.991202  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:57:50.002918  311138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:57:50.015086  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 09:57:50.027588  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 09:57:50.038500  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 09:57:50.050508  311138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:57:50.060907  311138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:57:50.069882  311138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:57:50.169936  311138 ssh_runner.go:195] Run: sudo systemctl restart containerd
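The sed calls above rewrite /etc/containerd/config.toml in place (pause image pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup switched on, runc v2 shim, CNI conf_dir) before containerd is restarted. The SystemdCgroup toggle expressed in Go rather than sed looks roughly like this (a sketch, assuming the caller can write the file):

package sketch

import (
	"os"
	"regexp"
)

// enableSystemdCgroup flips any "SystemdCgroup = ..." line in the given
// containerd config to "SystemdCgroup = true", preserving indentation,
// mirroring the `sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'`
// call in the log above.
func enableSystemdCgroup(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	return os.WriteFile(path, updated, 0o644)
}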
	I1123 09:57:50.287676  311138 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 09:57:50.287747  311138 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 09:57:50.292388  311138 start.go:564] Will wait 60s for crictl version
	I1123 09:57:50.292450  311138 ssh_runner.go:195] Run: which crictl
	I1123 09:57:50.296873  311138 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:57:50.325533  311138 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 09:57:50.325605  311138 ssh_runner.go:195] Run: containerd --version
	I1123 09:57:50.350974  311138 ssh_runner.go:195] Run: containerd --version
	I1123 09:57:50.381808  311138 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 09:57:50.383456  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:57:50.407801  311138 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 09:57:50.413000  311138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:57:50.425563  311138 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:57:50.425681  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:50.425728  311138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:57:50.458513  311138 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:57:50.458540  311138 containerd.go:534] Images already preloaded, skipping extraction
	I1123 09:57:50.458578  311138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:57:50.490466  311138 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:57:50.490488  311138 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:57:50.490496  311138 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1123 09:57:50.490604  311138 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-696492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:57:50.490683  311138 ssh_runner.go:195] Run: sudo crictl info
	I1123 09:57:50.519013  311138 cni.go:84] Creating CNI manager for ""
	I1123 09:57:50.519047  311138 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:57:50.519066  311138 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:57:50.519093  311138 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-696492 NodeName:default-k8s-diff-port-696492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:57:50.519249  311138 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-696492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:57:50.519326  311138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:57:50.531186  311138 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:57:50.531258  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:57:50.540764  311138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1123 09:57:50.556738  311138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:57:50.577978  311138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1123 09:57:50.594432  311138 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:57:50.598984  311138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:57:50.611087  311138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:57:50.713969  311138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:57:50.738999  311138 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492 for IP: 192.168.85.2
	I1123 09:57:50.739022  311138 certs.go:195] generating shared ca certs ...
	I1123 09:57:50.739042  311138 certs.go:227] acquiring lock for ca certs: {Name:mkf0ec2efb8866dd9406da39e0a5f5dc931fd377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:50.739203  311138 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key
	I1123 09:57:50.739256  311138 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key
	I1123 09:57:50.739271  311138 certs.go:257] generating profile certs ...
	I1123 09:57:50.739364  311138 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.key
	I1123 09:57:50.739382  311138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.crt with IP's: []
	I1123 09:57:50.902937  311138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.crt ...
	I1123 09:57:50.902975  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.crt: {Name:mk1be782fc73373be310b15837c277ec6685e2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:50.903176  311138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.key ...
	I1123 09:57:50.903195  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.key: {Name:mk6db5327a581ec783720f15c44b3730584ff35a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:50.903326  311138 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1
	I1123 09:57:50.903367  311138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 09:57:51.007041  311138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1 ...
	I1123 09:57:51.007079  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1: {Name:mk4d1a5fa60f123a8319b137c9ec74f1fa189955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.007285  311138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1 ...
	I1123 09:57:51.007298  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1: {Name:mkdd2b300e22459c4a8968bc56aef3e76c8f86f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.007514  311138 certs.go:382] copying /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt
	I1123 09:57:51.007636  311138 certs.go:386] copying /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key
	I1123 09:57:51.007701  311138 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key
	I1123 09:57:51.007715  311138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt with IP's: []
	I1123 09:57:51.045607  311138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt ...
	I1123 09:57:51.045642  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt: {Name:mkb29252ee6ba2f8bc8fb350259fbc7d524b689b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.045864  311138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key ...
	I1123 09:57:51.045887  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key: {Name:mk39c6b0c10f773b67a0a811d41c76d128d66647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.046116  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem (1338 bytes)
	W1123 09:57:51.046161  311138 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109_empty.pem, impossibly tiny 0 bytes
	I1123 09:57:51.046173  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:57:51.046197  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem (1082 bytes)
	I1123 09:57:51.046222  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:57:51.046245  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem (1679 bytes)
	I1123 09:57:51.046287  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:57:51.047046  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:57:51.071141  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 09:57:51.092546  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:57:51.116776  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:57:51.139235  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:57:51.160968  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:57:51.181315  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:57:51.203122  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:57:51.226401  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:57:51.252100  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem --> /usr/share/ca-certificates/7109.pem (1338 bytes)
	I1123 09:57:51.274287  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /usr/share/ca-certificates/71092.pem (1708 bytes)
	I1123 09:57:51.297105  311138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:57:51.313841  311138 ssh_runner.go:195] Run: openssl version
	I1123 09:57:51.322431  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:57:51.335037  311138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:57:51.339776  311138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:57:51.339848  311138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:57:51.383842  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:57:51.395820  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7109.pem && ln -fs /usr/share/ca-certificates/7109.pem /etc/ssl/certs/7109.pem"
	I1123 09:57:51.406811  311138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7109.pem
	I1123 09:57:51.411731  311138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:26 /usr/share/ca-certificates/7109.pem
	I1123 09:57:51.411802  311138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7109.pem
	I1123 09:57:51.456262  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7109.pem /etc/ssl/certs/51391683.0"
	I1123 09:57:51.467466  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71092.pem && ln -fs /usr/share/ca-certificates/71092.pem /etc/ssl/certs/71092.pem"
	I1123 09:57:51.479299  311138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71092.pem
	I1123 09:57:51.484434  311138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:26 /usr/share/ca-certificates/71092.pem
	I1123 09:57:51.484508  311138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71092.pem
	I1123 09:57:51.525183  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71092.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:57:51.535904  311138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:57:51.540741  311138 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:57:51.540806  311138 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:57:51.540889  311138 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 09:57:51.540937  311138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:57:51.573411  311138 cri.go:89] found id: ""
	I1123 09:57:51.573483  311138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:57:51.583208  311138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:57:51.592170  311138 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:57:51.592237  311138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:57:51.601224  311138 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:57:51.601243  311138 kubeadm.go:158] found existing configuration files:
	
	I1123 09:57:51.601292  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 09:57:51.610806  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:57:51.610871  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:57:51.619590  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 09:57:51.628676  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:57:51.628753  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:57:51.638382  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 09:57:51.648357  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:57:51.648452  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:57:51.657606  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 09:57:51.667094  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:57:51.667160  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:57:51.677124  311138 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:57:51.753028  311138 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 09:57:51.832851  311138 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	00150dfde10c5       56cc512116c8f       8 seconds ago       Running             busybox                   0                   387dc93d0a8cf       busybox                                      default
	db362a96711e6       52546a367cc9e       15 seconds ago      Running             coredns                   0                   f79b2dece7e26       coredns-66bc5c9577-8dgc7                     kube-system
	01f6da8fb3f7d       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   616ba95f738c5       storage-provisioner                          kube-system
	de43573b10ccd       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   a3928ac5eaafb       kindnet-f76c2                                kube-system
	c59b716fcc34d       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   9f1049e06b7be       kube-proxy-wm7k2                             kube-system
	ea002215dc5ff       7dd6aaa1717ab       40 seconds ago      Running             kube-scheduler            0                   8b2fee9d2694f       kube-scheduler-embed-certs-412583            kube-system
	786d0436a85fd       5f1f5298c888d       40 seconds ago      Running             etcd                      0                   8100c8a61784d       etcd-embed-certs-412583                      kube-system
	72aa47eb89fbb       c3994bc696102       40 seconds ago      Running             kube-apiserver            0                   179cb11cf0ad3       kube-apiserver-embed-certs-412583            kube-system
	0275433c40df6       c80c8dbafe7dd       40 seconds ago      Running             kube-controller-manager   0                   8a49c491842a3       kube-controller-manager-embed-certs-412583   kube-system
	
	
	==> containerd <==
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.263149253Z" level=info msg="connecting to shim 01f6da8fb3f7dfb36a0d1bf7ac34fa2c7715a85d4db29e51e680371cf976de98" address="unix:///run/containerd/s/a7b6f230a299bda0a1f0d256e0bd0247043fa02e595c6d77c8c5ff35955b1815" protocol=ttrpc version=3
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.264257110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8dgc7,Uid:f685cc03-30df-4119-9d66-0e808c2d3c93,Namespace:kube-system,Attempt:0,} returns sandbox id \"f79b2dece7e264261986c54a3329a94f4a2f31499e5aa8db86f0bd2ff6e4e3cc\""
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.277437170Z" level=info msg="CreateContainer within sandbox \"f79b2dece7e264261986c54a3329a94f4a2f31499e5aa8db86f0bd2ff6e4e3cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.287168811Z" level=info msg="Container db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.297178125Z" level=info msg="CreateContainer within sandbox \"f79b2dece7e264261986c54a3329a94f4a2f31499e5aa8db86f0bd2ff6e4e3cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4\""
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.298310630Z" level=info msg="StartContainer for \"db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4\""
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.301376554Z" level=info msg="connecting to shim db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4" address="unix:///run/containerd/s/193a80da954a991752534a897a9195e52bba571ad363258772fc97fd3f38dac6" protocol=ttrpc version=3
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.336587190Z" level=info msg="StartContainer for \"01f6da8fb3f7dfb36a0d1bf7ac34fa2c7715a85d4db29e51e680371cf976de98\" returns successfully"
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.375298903Z" level=info msg="StartContainer for \"db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4\" returns successfully"
	Nov 23 09:57:45 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:45.977288983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:37a908eb-6709-4200-8522-c8fe9a550046,Namespace:default,Attempt:0,}"
	Nov 23 09:57:46 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:46.660292281Z" level=info msg="connecting to shim 387dc93d0a8cf354ec95ce64993a0addad111dcef088fbc67260f0afeb734d60" address="unix:///run/containerd/s/fdaadc5b3798fd4dbeaa013b873623cdfb02e487051186fb770f84af6b6bfa04" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:57:46 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:46.736677153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:37a908eb-6709-4200-8522-c8fe9a550046,Namespace:default,Attempt:0,} returns sandbox id \"387dc93d0a8cf354ec95ce64993a0addad111dcef088fbc67260f0afeb734d60\""
	Nov 23 09:57:46 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:46.739095430Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.915793673Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.916930025Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.918571491Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.921809864Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.922431651Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.183293897s"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.922489800Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.927784863Z" level=info msg="CreateContainer within sandbox \"387dc93d0a8cf354ec95ce64993a0addad111dcef088fbc67260f0afeb734d60\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.937973254Z" level=info msg="Container 00150dfde10c51a55b91523dc6f606c6abbf087ac6d6bbe89494e33ad99c3223: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.948097816Z" level=info msg="CreateContainer within sandbox \"387dc93d0a8cf354ec95ce64993a0addad111dcef088fbc67260f0afeb734d60\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"00150dfde10c51a55b91523dc6f606c6abbf087ac6d6bbe89494e33ad99c3223\""
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.948875044Z" level=info msg="StartContainer for \"00150dfde10c51a55b91523dc6f606c6abbf087ac6d6bbe89494e33ad99c3223\""
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.949990703Z" level=info msg="connecting to shim 00150dfde10c51a55b91523dc6f606c6abbf087ac6d6bbe89494e33ad99c3223" address="unix:///run/containerd/s/fdaadc5b3798fd4dbeaa013b873623cdfb02e487051186fb770f84af6b6bfa04" protocol=ttrpc version=3
	Nov 23 09:57:49 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:49.019759190Z" level=info msg="StartContainer for \"00150dfde10c51a55b91523dc6f606c6abbf087ac6d6bbe89494e33ad99c3223\" returns successfully"
	
	
	==> coredns [db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49636 - 8471 "HINFO IN 3150291320313079176.8990028570470516833. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.165694981s
	
	
	==> describe nodes <==
	Name:               embed-certs-412583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-412583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=embed-certs-412583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_57_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:57:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-412583
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:57:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:57:54 +0000   Sun, 23 Nov 2025 09:57:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:57:54 +0000   Sun, 23 Nov 2025 09:57:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:57:54 +0000   Sun, 23 Nov 2025 09:57:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:57:54 +0000   Sun, 23 Nov 2025 09:57:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-412583
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                9da7e891-3f25-4983-8fba-6666bb3db318
	  Boot ID:                    e4c4d39b-bebd-4037-9237-26b945dbe084
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-8dgc7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-embed-certs-412583                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-f76c2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-embed-certs-412583             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-embed-certs-412583    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-wm7k2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-embed-certs-412583             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node embed-certs-412583 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node embed-certs-412583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node embed-certs-412583 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node embed-certs-412583 event: Registered Node embed-certs-412583 in Controller
	  Normal  NodeReady                16s   kubelet          Node embed-certs-412583 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.288463] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 09:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[Nov23 09:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.195562] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +5.912917] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 c0 1c 98 33 a9 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.002091] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 47 bd bf 96 57 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[  +4.460318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 85 b9 91 f8 a4 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +2.904694] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	[Nov23 09:57] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 76 48 bf 8b d1 fc 08 06
	[  +0.000931] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	
	
	==> etcd [786d0436a85fd77d6e60804d917a286d3d71195fdb79aff7ac861499ed514dbf] <==
	{"level":"warn","ts":"2025-11-23T09:57:20.187808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.202358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.216919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.236022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.249228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.265771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.277721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.287910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.301044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.317683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.333908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.350810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.368555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.457172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45970","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:57:45.481895Z","caller":"traceutil/trace.go:172","msg":"trace[2106743918] linearizableReadLoop","detail":"{readStateIndex:441; appliedIndex:441; }","duration":"149.262873ms","start":"2025-11-23T09:57:45.332612Z","end":"2025-11-23T09:57:45.481875Z","steps":["trace[2106743918] 'read index received'  (duration: 149.255305ms)","trace[2106743918] 'applied index is now lower than readState.Index'  (duration: 6.205µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:45.595660Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.52134ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:57:45.595740Z","caller":"traceutil/trace.go:172","msg":"trace[443334218] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:425; }","duration":"200.603744ms","start":"2025-11-23T09:57:45.395120Z","end":"2025-11-23T09:57:45.595724Z","steps":["trace[443334218] 'range keys from in-memory index tree'  (duration: 200.48665ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:45.595661Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"263.034946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:57:45.595792Z","caller":"traceutil/trace.go:172","msg":"trace[909354569] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:425; }","duration":"263.192733ms","start":"2025-11-23T09:57:45.332591Z","end":"2025-11-23T09:57:45.595784Z","steps":["trace[909354569] 'agreement among raft nodes before linearized reading'  (duration: 149.371763ms)","trace[909354569] 'range keys from in-memory index tree'  (duration: 113.623338ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:45.596376Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.821874ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790208425156421 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/default/busybox\" mod_revision:425 > success:<request_put:<key:\"/registry/pods/default/busybox\" value_size:1260 >> failure:<request_range:<key:\"/registry/pods/default/busybox\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:57:45.596515Z","caller":"traceutil/trace.go:172","msg":"trace[1557009709] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"273.56361ms","start":"2025-11-23T09:57:45.322937Z","end":"2025-11-23T09:57:45.596501Z","steps":["trace[1557009709] 'process raft request'  (duration: 159.045437ms)","trace[1557009709] 'compare'  (duration: 113.59463ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:57:45.790683Z","caller":"traceutil/trace.go:172","msg":"trace[1096355566] linearizableReadLoop","detail":"{readStateIndex:443; appliedIndex:443; }","duration":"149.60145ms","start":"2025-11-23T09:57:45.641048Z","end":"2025-11-23T09:57:45.790649Z","steps":["trace[1096355566] 'read index received'  (duration: 149.590767ms)","trace[1096355566] 'applied index is now lower than readState.Index'  (duration: 9.211µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:45.801745Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.673197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:57:45.801813Z","caller":"traceutil/trace.go:172","msg":"trace[26794502] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:427; }","duration":"160.761222ms","start":"2025-11-23T09:57:45.641038Z","end":"2025-11-23T09:57:45.801800Z","steps":["trace[26794502] 'agreement among raft nodes before linearized reading'  (duration: 149.705905ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:57:45.801900Z","caller":"traceutil/trace.go:172","msg":"trace[1536416025] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"171.194558ms","start":"2025-11-23T09:57:45.630689Z","end":"2025-11-23T09:57:45.801883Z","steps":["trace[1536416025] 'process raft request'  (duration: 160.021394ms)","trace[1536416025] 'compare'  (duration: 11.07365ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:57:57 up 40 min,  0 user,  load average: 4.93, 4.11, 2.62
	Linux embed-certs-412583 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [de43573b10ccd2db93907531b927156400b38e1ccc072df4694f86271eadb2a7] <==
	I1123 09:57:31.366093       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:57:31.366394       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 09:57:31.366574       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:57:31.366591       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:57:31.366613       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:57:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:57:31.662809       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:57:31.662848       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:57:31.662910       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:57:31.663873       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:57:32.132788       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:57:32.132836       1 metrics.go:72] Registering metrics
	I1123 09:57:32.132932       1 controller.go:711] "Syncing nftables rules"
	I1123 09:57:41.663029       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:57:41.663106       1 main.go:301] handling current node
	I1123 09:57:51.663064       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:57:51.663097       1 main.go:301] handling current node
	
	
	==> kube-apiserver [72aa47eb89fbb59da47429e762a23f4e68077fe27b50deb7d4860da7370e5f9b] <==
	I1123 09:57:21.236407       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:57:21.236459       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:57:21.251314       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:21.255073       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:57:21.276590       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:21.280570       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:57:21.440092       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:57:22.162976       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:57:22.171528       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:57:22.171550       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:57:23.042887       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:57:23.156762       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:57:23.262042       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:57:23.270761       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1123 09:57:23.272071       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:57:23.277943       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:57:24.058407       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:57:24.064939       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:57:24.078625       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:57:24.088680       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:57:29.356169       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:29.361491       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:29.855754       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:57:30.203249       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 09:57:56.444276       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:46964: use of closed network connection
	
	
	==> kube-controller-manager [0275433c40df693012ccd198e9424273105899b21f0e3e75bc2219ef022bdec2] <==
	I1123 09:57:29.151404       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:57:29.151569       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:57:29.151629       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:57:29.151623       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:57:29.151898       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:57:29.151953       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:57:29.152073       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:57:29.152128       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:57:29.152142       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:57:29.152152       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:57:29.153392       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 09:57:29.153531       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 09:57:29.153605       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 09:57:29.153612       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 09:57:29.153619       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 09:57:29.153797       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:57:29.155189       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:57:29.158317       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:57:29.166804       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:57:29.169079       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-412583" podCIDRs=["10.244.0.0/24"]
	I1123 09:57:29.178972       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:57:29.199943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:57:29.199967       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:57:29.199979       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:57:44.150263       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c59b716fcc34de4cd73575b55a3765828129eb26a8da3f4e32971f259a35d5b9] <==
	I1123 09:57:30.916623       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:57:30.988032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:57:31.088790       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:57:31.088849       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 09:57:31.088971       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:57:31.116731       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:57:31.116825       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:57:31.123212       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:57:31.123727       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:57:31.123771       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:57:31.127038       1 config.go:200] "Starting service config controller"
	I1123 09:57:31.127074       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:57:31.127211       1 config.go:309] "Starting node config controller"
	I1123 09:57:31.127237       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:57:31.127260       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:57:31.127265       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:57:31.127261       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:57:31.127310       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:57:31.227299       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:57:31.227378       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:57:31.227389       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:57:31.227411       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ea002215dc5ff9de708bfb501c13731db3b837342413eaa850d2bdaa9db3326b] <==
	E1123 09:57:21.199364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:57:21.199528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:57:21.199629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:57:21.199733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:57:21.199855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:57:21.199851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:57:22.065614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:57:22.077308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:57:22.103659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:57:22.156471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:57:22.161197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:57:22.215761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:57:22.276078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:57:22.357615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:57:22.371618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:57:22.401762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:57:22.419069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:57:22.477267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:57:22.516489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:57:22.518696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:57:22.565990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:57:22.586754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:57:22.638652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:57:22.714174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1123 09:57:25.092978       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:57:24 embed-certs-412583 kubelet[1414]: E1123 09:57:24.939085    1414 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-embed-certs-412583\" already exists" pod="kube-system/kube-controller-manager-embed-certs-412583"
	Nov 23 09:57:24 embed-certs-412583 kubelet[1414]: I1123 09:57:24.980038    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-412583" podStartSLOduration=0.980011304 podStartE2EDuration="980.011304ms" podCreationTimestamp="2025-11-23 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:24.954274858 +0000 UTC m=+1.143838257" watchObservedRunningTime="2025-11-23 09:57:24.980011304 +0000 UTC m=+1.169574704"
	Nov 23 09:57:25 embed-certs-412583 kubelet[1414]: I1123 09:57:25.006846    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-412583" podStartSLOduration=1.006606971 podStartE2EDuration="1.006606971s" podCreationTimestamp="2025-11-23 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:24.987760256 +0000 UTC m=+1.177323653" watchObservedRunningTime="2025-11-23 09:57:25.006606971 +0000 UTC m=+1.196170387"
	Nov 23 09:57:25 embed-certs-412583 kubelet[1414]: I1123 09:57:25.007093    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-412583" podStartSLOduration=3.007077201 podStartE2EDuration="3.007077201s" podCreationTimestamp="2025-11-23 09:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:25.006544168 +0000 UTC m=+1.196107566" watchObservedRunningTime="2025-11-23 09:57:25.007077201 +0000 UTC m=+1.196640602"
	Nov 23 09:57:25 embed-certs-412583 kubelet[1414]: I1123 09:57:25.045418    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-412583" podStartSLOduration=1.04539165 podStartE2EDuration="1.04539165s" podCreationTimestamp="2025-11-23 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:25.023834862 +0000 UTC m=+1.213398262" watchObservedRunningTime="2025-11-23 09:57:25.04539165 +0000 UTC m=+1.234955049"
	Nov 23 09:57:29 embed-certs-412583 kubelet[1414]: I1123 09:57:29.207810    1414 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:57:29 embed-certs-412583 kubelet[1414]: I1123 09:57:29.209108    1414 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.337770    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/16967e76-b4bf-4a99-aab9-d7f76cbb0830-cni-cfg\") pod \"kindnet-f76c2\" (UID: \"16967e76-b4bf-4a99-aab9-d7f76cbb0830\") " pod="kube-system/kindnet-f76c2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.338143    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9-xtables-lock\") pod \"kube-proxy-wm7k2\" (UID: \"120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9\") " pod="kube-system/kube-proxy-wm7k2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.340545    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16967e76-b4bf-4a99-aab9-d7f76cbb0830-lib-modules\") pod \"kindnet-f76c2\" (UID: \"16967e76-b4bf-4a99-aab9-d7f76cbb0830\") " pod="kube-system/kindnet-f76c2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.342812    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cfk2\" (UniqueName: \"kubernetes.io/projected/16967e76-b4bf-4a99-aab9-d7f76cbb0830-kube-api-access-8cfk2\") pod \"kindnet-f76c2\" (UID: \"16967e76-b4bf-4a99-aab9-d7f76cbb0830\") " pod="kube-system/kindnet-f76c2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.343058    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9-kube-proxy\") pod \"kube-proxy-wm7k2\" (UID: \"120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9\") " pod="kube-system/kube-proxy-wm7k2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.343664    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2krm\" (UniqueName: \"kubernetes.io/projected/120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9-kube-api-access-w2krm\") pod \"kube-proxy-wm7k2\" (UID: \"120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9\") " pod="kube-system/kube-proxy-wm7k2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.344587    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16967e76-b4bf-4a99-aab9-d7f76cbb0830-xtables-lock\") pod \"kindnet-f76c2\" (UID: \"16967e76-b4bf-4a99-aab9-d7f76cbb0830\") " pod="kube-system/kindnet-f76c2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.344818    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9-lib-modules\") pod \"kube-proxy-wm7k2\" (UID: \"120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9\") " pod="kube-system/kube-proxy-wm7k2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.976417    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wm7k2" podStartSLOduration=0.97639176 podStartE2EDuration="976.39176ms" podCreationTimestamp="2025-11-23 09:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:30.97175995 +0000 UTC m=+7.161323349" watchObservedRunningTime="2025-11-23 09:57:30.97639176 +0000 UTC m=+7.165955158"
	Nov 23 09:57:31 embed-certs-412583 kubelet[1414]: I1123 09:57:31.965243    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-f76c2" podStartSLOduration=1.965220701 podStartE2EDuration="1.965220701s" podCreationTimestamp="2025-11-23 09:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:31.965049893 +0000 UTC m=+8.154613292" watchObservedRunningTime="2025-11-23 09:57:31.965220701 +0000 UTC m=+8.154784100"
	Nov 23 09:57:41 embed-certs-412583 kubelet[1414]: I1123 09:57:41.764467    1414 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:57:41 embed-certs-412583 kubelet[1414]: I1123 09:57:41.921311    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pczs\" (UniqueName: \"kubernetes.io/projected/f685cc03-30df-4119-9d66-0e808c2d3c93-kube-api-access-4pczs\") pod \"coredns-66bc5c9577-8dgc7\" (UID: \"f685cc03-30df-4119-9d66-0e808c2d3c93\") " pod="kube-system/coredns-66bc5c9577-8dgc7"
	Nov 23 09:57:41 embed-certs-412583 kubelet[1414]: I1123 09:57:41.921501    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f685cc03-30df-4119-9d66-0e808c2d3c93-config-volume\") pod \"coredns-66bc5c9577-8dgc7\" (UID: \"f685cc03-30df-4119-9d66-0e808c2d3c93\") " pod="kube-system/coredns-66bc5c9577-8dgc7"
	Nov 23 09:57:41 embed-certs-412583 kubelet[1414]: I1123 09:57:41.921540    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dcf16920-e30b-42ab-8195-4ef946498d0f-tmp\") pod \"storage-provisioner\" (UID: \"dcf16920-e30b-42ab-8195-4ef946498d0f\") " pod="kube-system/storage-provisioner"
	Nov 23 09:57:41 embed-certs-412583 kubelet[1414]: I1123 09:57:41.921560    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6rhp\" (UniqueName: \"kubernetes.io/projected/dcf16920-e30b-42ab-8195-4ef946498d0f-kube-api-access-z6rhp\") pod \"storage-provisioner\" (UID: \"dcf16920-e30b-42ab-8195-4ef946498d0f\") " pod="kube-system/storage-provisioner"
	Nov 23 09:57:43 embed-certs-412583 kubelet[1414]: I1123 09:57:43.000608    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8dgc7" podStartSLOduration=13.000583929 podStartE2EDuration="13.000583929s" podCreationTimestamp="2025-11-23 09:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:43.000544713 +0000 UTC m=+19.190108137" watchObservedRunningTime="2025-11-23 09:57:43.000583929 +0000 UTC m=+19.190147342"
	Nov 23 09:57:43 embed-certs-412583 kubelet[1414]: I1123 09:57:43.030945    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.030922513 podStartE2EDuration="12.030922513s" podCreationTimestamp="2025-11-23 09:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:43.014461805 +0000 UTC m=+19.204025204" watchObservedRunningTime="2025-11-23 09:57:43.030922513 +0000 UTC m=+19.220485912"
	Nov 23 09:57:45 embed-certs-412583 kubelet[1414]: I1123 09:57:45.747146    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q4vb\" (UniqueName: \"kubernetes.io/projected/37a908eb-6709-4200-8522-c8fe9a550046-kube-api-access-8q4vb\") pod \"busybox\" (UID: \"37a908eb-6709-4200-8522-c8fe9a550046\") " pod="default/busybox"
	
	
	==> storage-provisioner [01f6da8fb3f7dfb36a0d1bf7ac34fa2c7715a85d4db29e51e680371cf976de98] <==
	I1123 09:57:42.345743       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:57:42.354860       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:57:42.355022       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:57:42.358134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:42.365209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:57:42.365571       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:57:42.365706       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4cb99382-7f2c-4efe-9082-eae1f39758b2", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-412583_c2d51ccd-86cc-409b-a8dd-4eb050378ace became leader
	I1123 09:57:42.365777       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-412583_c2d51ccd-86cc-409b-a8dd-4eb050378ace!
	W1123 09:57:42.369067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:42.373535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:57:42.466312       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-412583_c2d51ccd-86cc-409b-a8dd-4eb050378ace!
	W1123 09:57:44.377239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:44.386889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:46.390510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:46.425061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:48.433040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:48.445234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:50.449853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:50.456157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:52.460081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:52.466504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:54.470173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:54.475406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:56.478726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:56.484037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412583 -n embed-certs-412583
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-412583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-412583
helpers_test.go:243: (dbg) docker inspect embed-certs-412583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277",
	        "Created": "2025-11-23T09:57:03.852986793Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:57:03.913206148Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277/hosts",
	        "LogPath": "/var/lib/docker/containers/7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277/7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277-json.log",
	        "Name": "/embed-certs-412583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-412583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-412583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7a22543402f85200cf585d677534a344930a0584785d3b8b562dd83ade581277",
	                "LowerDir": "/var/lib/docker/overlay2/d3050ed3acfa540bcb83ba19967396acb2acfd1e83630f56cb159c37cebe8813-init/diff:/var/lib/docker/overlay2/c80a0dfdb81b7753b0a82e2bc6458805cbbad0a9ce5819c63e1d9b7b71ba226c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3050ed3acfa540bcb83ba19967396acb2acfd1e83630f56cb159c37cebe8813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3050ed3acfa540bcb83ba19967396acb2acfd1e83630f56cb159c37cebe8813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3050ed3acfa540bcb83ba19967396acb2acfd1e83630f56cb159c37cebe8813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-412583",
	                "Source": "/var/lib/docker/volumes/embed-certs-412583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-412583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-412583",
	                "name.minikube.sigs.k8s.io": "embed-certs-412583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c087577399c2df976fd2fa55e091b19ec6dcc6597777ebf6518d0fa151289ca2",
	            "SandboxKey": "/var/run/docker/netns/c087577399c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-412583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8ee659370d2c34a46a25b0fbc93ad5ac08fb612d1cf2c36db6da4f7931d8317d",
	                    "EndpointID": "d82b70ac28ef7ddb287ff63171846450ebb944a2de1446e3f8e6cc90441445a7",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "5a:da:8c:69:c5:18",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-412583",
	                        "7a22543402f8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412583 -n embed-certs-412583
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-412583 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-412583 logs -n 25: (1.478849173s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-676928 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /var/lib/kubelet/config.yaml                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status docker --all --full --no-pager                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat docker --no-pager                                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/docker/daemon.json                                                                                                                              │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo docker system info                                                                                                                                       │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl status cri-docker --all --full --no-pager                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat cri-docker --no-pager                                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                 │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                           │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cri-dockerd --version                                                                                                                                    │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status containerd --all --full --no-pager                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl cat containerd --no-pager                                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /lib/systemd/system/containerd.service                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/containerd/config.toml                                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo containerd config dump                                                                                                                                   │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status crio --all --full --no-pager                                                                                                            │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat crio --no-pager                                                                                                                            │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                  │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo crio config                                                                                                                                              │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p bridge-676928                                                                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p disable-driver-mounts-178820                                                                                                                                                │ disable-driver-mounts-178820 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ start   │ -p default-k8s-diff-port-696492 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ default-k8s-diff-port-696492 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-709593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                   │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ stop    │ -p old-k8s-version-709593 --alsologtostderr -v=3                                                                                                                               │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:57:41
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:57:41.194019  311138 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:57:41.194298  311138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:57:41.194308  311138 out.go:374] Setting ErrFile to fd 2...
	I1123 09:57:41.194312  311138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:57:41.194606  311138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:57:41.195144  311138 out.go:368] Setting JSON to false
	I1123 09:57:41.196591  311138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2400,"bootTime":1763889461,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:57:41.196668  311138 start.go:143] virtualization: kvm guest
	I1123 09:57:41.199167  311138 out.go:179] * [default-k8s-diff-port-696492] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:57:41.201043  311138 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:57:41.201094  311138 notify.go:221] Checking for updates...
	I1123 09:57:41.204382  311138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:57:41.206017  311138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:57:41.207959  311138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:57:41.209794  311138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:57:41.211809  311138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:57:41.214009  311138 config.go:182] Loaded profile config "embed-certs-412583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:41.214105  311138 config.go:182] Loaded profile config "no-preload-309734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:41.214180  311138 config.go:182] Loaded profile config "old-k8s-version-709593": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 09:57:41.214271  311138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:57:41.241306  311138 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:57:41.241474  311138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:57:41.312013  311138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:57:41.299959199 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:57:41.312116  311138 docker.go:319] overlay module found
	I1123 09:57:41.314243  311138 out.go:179] * Using the docker driver based on user configuration
	I1123 09:57:41.316002  311138 start.go:309] selected driver: docker
	I1123 09:57:41.316024  311138 start.go:927] validating driver "docker" against <nil>
	I1123 09:57:41.316037  311138 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:57:41.316751  311138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:57:41.385595  311138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:57:41.373759534 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:57:41.385794  311138 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:57:41.386023  311138 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:41.388087  311138 out.go:179] * Using Docker driver with root privileges
	I1123 09:57:41.389651  311138 cni.go:84] Creating CNI manager for ""
	I1123 09:57:41.389725  311138 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:57:41.389738  311138 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:57:41.389816  311138 start.go:353] cluster config:
	{Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:57:41.391556  311138 out.go:179] * Starting "default-k8s-diff-port-696492" primary control-plane node in "default-k8s-diff-port-696492" cluster
	I1123 09:57:41.392982  311138 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 09:57:41.394476  311138 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:57:41.395978  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:41.396028  311138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 09:57:41.396036  311138 cache.go:65] Caching tarball of preloaded images
	I1123 09:57:41.396075  311138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:57:41.396157  311138 preload.go:238] Found /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 09:57:41.396175  311138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 09:57:41.396320  311138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json ...
	I1123 09:57:41.396374  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json: {Name:mk3b81d8fd8561a54828649e3e510565221995b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:41.422089  311138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:57:41.422112  311138 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:57:41.422133  311138 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:57:41.422177  311138 start.go:360] acquireMachinesLock for default-k8s-diff-port-696492: {Name:mkc8ee83ed2b7a995e355ddec223dfeea233bbf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:57:41.422316  311138 start.go:364] duration metric: took 112.296µs to acquireMachinesLock for "default-k8s-diff-port-696492"
	I1123 09:57:41.422500  311138 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disabl
eCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:57:41.422632  311138 start.go:125] createHost starting for "" (driver="docker")
	W1123 09:57:37.251564  300017 node_ready.go:57] node "embed-certs-412583" has "Ready":"False" status (will retry)
	W1123 09:57:39.751746  300017 node_ready.go:57] node "embed-certs-412583" has "Ready":"False" status (will retry)
	I1123 09:57:42.255256  300017 node_ready.go:49] node "embed-certs-412583" is "Ready"
	I1123 09:57:42.255291  300017 node_ready.go:38] duration metric: took 11.507766088s for node "embed-certs-412583" to be "Ready" ...
	I1123 09:57:42.255310  300017 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:57:42.255471  300017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:57:42.277737  300017 api_server.go:72] duration metric: took 12.028046262s to wait for apiserver process to appear ...
	I1123 09:57:42.277770  300017 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:57:42.277792  300017 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:57:42.285468  300017 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 09:57:42.287274  300017 api_server.go:141] control plane version: v1.34.1
	I1123 09:57:42.287395  300017 api_server.go:131] duration metric: took 9.61454ms to wait for apiserver health ...
	I1123 09:57:42.287422  300017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:57:42.294433  300017 system_pods.go:59] 8 kube-system pods found
	I1123 09:57:42.294478  300017 system_pods.go:61] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.294486  300017 system_pods.go:61] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.294493  300017 system_pods.go:61] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.294499  300017 system_pods.go:61] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.294505  300017 system_pods.go:61] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.294510  300017 system_pods.go:61] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.294515  300017 system_pods.go:61] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.294526  300017 system_pods.go:61] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.294539  300017 system_pods.go:74] duration metric: took 7.098728ms to wait for pod list to return data ...
	I1123 09:57:42.294549  300017 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:57:42.298321  300017 default_sa.go:45] found service account: "default"
	I1123 09:57:42.298368  300017 default_sa.go:55] duration metric: took 3.811774ms for default service account to be created ...
	I1123 09:57:42.298382  300017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:57:42.302807  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.302871  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.302887  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.302896  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.302903  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.302927  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.302937  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.302943  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.302954  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.303049  300017 retry.go:31] will retry after 268.599682ms: missing components: kube-dns
	I1123 09:57:42.577490  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.577531  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.577541  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.577550  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.577557  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.577563  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.577568  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.577573  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.577581  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.577600  300017 retry.go:31] will retry after 240.156475ms: missing components: kube-dns
	I1123 09:57:42.822131  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.822171  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.822177  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.822182  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.822186  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.822190  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.822194  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.822197  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.822202  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.822216  300017 retry.go:31] will retry after 383.926777ms: missing components: kube-dns
	I1123 09:57:43.211532  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:43.211575  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Running
	I1123 09:57:43.211585  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:43.211592  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:43.211600  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:43.211608  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:43.211624  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:43.211635  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:43.211640  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Running
	I1123 09:57:43.211650  300017 system_pods.go:126] duration metric: took 913.260942ms to wait for k8s-apps to be running ...
	I1123 09:57:43.211661  300017 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:57:43.211722  300017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:57:43.226055  300017 system_svc.go:56] duration metric: took 14.383207ms WaitForService to wait for kubelet
	I1123 09:57:43.226087  300017 kubeadm.go:587] duration metric: took 12.976401428s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:43.226108  300017 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:57:43.229492  300017 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:57:43.229524  300017 node_conditions.go:123] node cpu capacity is 8
	I1123 09:57:43.229547  300017 node_conditions.go:105] duration metric: took 3.432669ms to run NodePressure ...
	I1123 09:57:43.229560  300017 start.go:242] waiting for startup goroutines ...
	I1123 09:57:43.229570  300017 start.go:247] waiting for cluster config update ...
	I1123 09:57:43.229583  300017 start.go:256] writing updated cluster config ...
	I1123 09:57:43.229975  300017 ssh_runner.go:195] Run: rm -f paused
	I1123 09:57:43.235596  300017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:43.243251  300017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.248984  300017 pod_ready.go:94] pod "coredns-66bc5c9577-8dgc7" is "Ready"
	I1123 09:57:43.249015  300017 pod_ready.go:86] duration metric: took 5.729453ms for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.251635  300017 pod_ready.go:83] waiting for pod "etcd-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.256613  300017 pod_ready.go:94] pod "etcd-embed-certs-412583" is "Ready"
	I1123 09:57:43.256645  300017 pod_ready.go:86] duration metric: took 4.984583ms for pod "etcd-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.259023  300017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.264242  300017 pod_ready.go:94] pod "kube-apiserver-embed-certs-412583" is "Ready"
	I1123 09:57:43.264273  300017 pod_ready.go:86] duration metric: took 5.223434ms for pod "kube-apiserver-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.311182  300017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.642602  300017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-412583" is "Ready"
	I1123 09:57:43.642637  300017 pod_ready.go:86] duration metric: took 331.426321ms for pod "kube-controller-manager-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.843849  300017 pod_ready.go:83] waiting for pod "kube-proxy-wm7k2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.244623  300017 pod_ready.go:94] pod "kube-proxy-wm7k2" is "Ready"
	I1123 09:57:44.244667  300017 pod_ready.go:86] duration metric: took 400.77745ms for pod "kube-proxy-wm7k2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.444056  300017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.843963  300017 pod_ready.go:94] pod "kube-scheduler-embed-certs-412583" is "Ready"
	I1123 09:57:44.843992  300017 pod_ready.go:86] duration metric: took 399.904179ms for pod "kube-scheduler-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.844006  300017 pod_ready.go:40] duration metric: took 1.608365258s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:44.891853  300017 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:57:44.964864  300017 out.go:179] * Done! kubectl is now configured to use "embed-certs-412583" cluster and "default" namespace by default
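	The embed-certs profile's extra readiness wait above walks each labelled kube-system pod in turn before printing "Done!". A hypothetical manual spot-check of the same pods by the same labels (the context name comes from the log; these kubectl commands are illustrative, not the test's own code path):
	    kubectl --context embed-certs-412583 -n kube-system get pods -l 'k8s-app in (kube-dns, kube-proxy)' -o wide
	    kubectl --context embed-certs-412583 -n kube-system get pods -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' -o wide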
	W1123 09:57:41.488122  296642 node_ready.go:57] node "no-preload-309734" has "Ready":"False" status (will retry)
	W1123 09:57:43.488201  296642 node_ready.go:57] node "no-preload-309734" has "Ready":"False" status (will retry)
	I1123 09:57:43.988019  296642 node_ready.go:49] node "no-preload-309734" is "Ready"
	I1123 09:57:43.988052  296642 node_ready.go:38] duration metric: took 14.003534589s for node "no-preload-309734" to be "Ready" ...
	I1123 09:57:43.988069  296642 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:57:43.988149  296642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:57:44.008503  296642 api_server.go:72] duration metric: took 14.434117996s to wait for apiserver process to appear ...
	I1123 09:57:44.008530  296642 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:57:44.008551  296642 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:57:44.017109  296642 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
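	The healthz probe recorded just above can be reproduced by hand against the same endpoint; a sketch only (IP and port are copied from the log, and -k merely skips verification of the test cluster's CA on the host):
	    curl -sk https://192.168.94.2:8443/healthz
	    # expected response body: ok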
	I1123 09:57:44.018176  296642 api_server.go:141] control plane version: v1.34.1
	I1123 09:57:44.018200  296642 api_server.go:131] duration metric: took 9.663468ms to wait for apiserver health ...
	I1123 09:57:44.018208  296642 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:57:44.022287  296642 system_pods.go:59] 8 kube-system pods found
	I1123 09:57:44.022324  296642 system_pods.go:61] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.022351  296642 system_pods.go:61] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.022364  296642 system_pods.go:61] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.022369  296642 system_pods.go:61] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.022375  296642 system_pods.go:61] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.022381  296642 system_pods.go:61] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.022387  296642 system_pods.go:61] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.022397  296642 system_pods.go:61] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.022406  296642 system_pods.go:74] duration metric: took 4.191598ms to wait for pod list to return data ...
	I1123 09:57:44.022421  296642 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:57:44.025262  296642 default_sa.go:45] found service account: "default"
	I1123 09:57:44.025287  296642 default_sa.go:55] duration metric: took 2.858313ms for default service account to be created ...
	I1123 09:57:44.025300  296642 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:57:44.028240  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.028269  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.028275  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.028281  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.028285  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.028289  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.028293  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.028296  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.028300  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.028346  296642 retry.go:31] will retry after 283.472429ms: missing components: kube-dns
	I1123 09:57:44.317300  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.317353  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.317361  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.317370  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.317376  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.317382  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.317387  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.317391  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.317397  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.317416  296642 retry.go:31] will retry after 321.7427ms: missing components: kube-dns
	I1123 09:57:44.689277  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.689322  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.689344  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.689353  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.689359  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.689366  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.689370  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.689375  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.689382  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.689411  296642 retry.go:31] will retry after 353.961831ms: missing components: kube-dns
	I1123 09:57:45.048995  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:45.049060  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:45.049069  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:45.049078  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:45.049084  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:45.049090  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:45.049099  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:45.049104  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:45.049116  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:45.049135  296642 retry.go:31] will retry after 412.630882ms: missing components: kube-dns
	I1123 09:57:45.607770  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:45.607816  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:45.607826  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:45.607836  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:45.607841  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:45.607847  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:45.607851  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:45.607856  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:45.607873  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:45.607891  296642 retry.go:31] will retry after 544.365573ms: missing components: kube-dns
	I1123 09:57:41.425584  311138 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:57:41.425893  311138 start.go:159] libmachine.API.Create for "default-k8s-diff-port-696492" (driver="docker")
	I1123 09:57:41.425945  311138 client.go:173] LocalClient.Create starting
	I1123 09:57:41.426056  311138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem
	I1123 09:57:41.426100  311138 main.go:143] libmachine: Decoding PEM data...
	I1123 09:57:41.426121  311138 main.go:143] libmachine: Parsing certificate...
	I1123 09:57:41.426185  311138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem
	I1123 09:57:41.426208  311138 main.go:143] libmachine: Decoding PEM data...
	I1123 09:57:41.426217  311138 main.go:143] libmachine: Parsing certificate...
	I1123 09:57:41.426608  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:57:41.445568  311138 cli_runner.go:211] docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:57:41.445670  311138 network_create.go:284] running [docker network inspect default-k8s-diff-port-696492] to gather additional debugging logs...
	I1123 09:57:41.445697  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492
	W1123 09:57:41.465174  311138 cli_runner.go:211] docker network inspect default-k8s-diff-port-696492 returned with exit code 1
	I1123 09:57:41.465216  311138 network_create.go:287] error running [docker network inspect default-k8s-diff-port-696492]: docker network inspect default-k8s-diff-port-696492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-696492 not found
	I1123 09:57:41.465236  311138 network_create.go:289] output of [docker network inspect default-k8s-diff-port-696492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-696492 not found
	
	** /stderr **
	I1123 09:57:41.465403  311138 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:57:41.487255  311138 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-de5cba392bb4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:8d:f5:88:bc:8b} reservation:<nil>}
	I1123 09:57:41.488105  311138 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e2eabbe85d5b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:f4:02:bd:23:31} reservation:<nil>}
	I1123 09:57:41.489037  311138 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-22e47e96d08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:9e:83:f9:9f:f6} reservation:<nil>}
	I1123 09:57:41.489614  311138 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4fa988beb7cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:18:12:be:77:f6} reservation:<nil>}
	I1123 09:57:41.492079  311138 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d80820}
	I1123 09:57:41.492121  311138 network_create.go:124] attempt to create docker network default-k8s-diff-port-696492 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 09:57:41.492171  311138 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 default-k8s-diff-port-696492
	I1123 09:57:41.554538  311138 network_create.go:108] docker network default-k8s-diff-port-696492 192.168.85.0/24 created
	I1123 09:57:41.554588  311138 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-696492" container
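	The network-creation step that produced 192.168.85.0/24 is the single docker command recorded above, reflowed here for readability (every flag, label and name is copied from the log line, nothing added):
	    docker network create --driver=bridge \
	      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	      --label=created_by.minikube.sigs.k8s.io=true \
	      --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 \
	      default-k8s-diff-port-696492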
	I1123 09:57:41.554664  311138 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:57:41.575522  311138 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-696492 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:57:41.598058  311138 oci.go:103] Successfully created a docker volume default-k8s-diff-port-696492
	I1123 09:57:41.598141  311138 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-696492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --entrypoint /usr/bin/test -v default-k8s-diff-port-696492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:57:42.041176  311138 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-696492
	I1123 09:57:42.041254  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:42.041269  311138 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:57:42.041325  311138 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-696492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
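	The preload extraction started above (it completes about 4.7s later, further down in the log) is the docker invocation below, reflowed from the log line; the tarball path, volume name and image digest are copied verbatim:
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro \
	      -v default-k8s-diff-port-696492:/extractDir \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f \
	      -I lz4 -xf /preloaded.tar -C /extractDir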
	I1123 09:57:46.265821  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:46.265851  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Running
	I1123 09:57:46.265856  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:46.265860  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:46.265863  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:46.265868  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:46.265870  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:46.265875  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:46.265879  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Running
	I1123 09:57:46.265889  296642 system_pods.go:126] duration metric: took 2.240582653s to wait for k8s-apps to be running ...
	I1123 09:57:46.265903  296642 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:57:46.265972  296642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:57:46.283075  296642 system_svc.go:56] duration metric: took 17.161056ms WaitForService to wait for kubelet
	I1123 09:57:46.283105  296642 kubeadm.go:587] duration metric: took 16.70872571s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:46.283128  296642 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:57:46.491444  296642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:57:46.491473  296642 node_conditions.go:123] node cpu capacity is 8
	I1123 09:57:46.491486  296642 node_conditions.go:105] duration metric: took 208.353263ms to run NodePressure ...
	I1123 09:57:46.491509  296642 start.go:242] waiting for startup goroutines ...
	I1123 09:57:46.491520  296642 start.go:247] waiting for cluster config update ...
	I1123 09:57:46.491533  296642 start.go:256] writing updated cluster config ...
	I1123 09:57:46.491804  296642 ssh_runner.go:195] Run: rm -f paused
	I1123 09:57:46.498152  296642 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:46.503240  296642 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.508998  296642 pod_ready.go:94] pod "coredns-66bc5c9577-sx25q" is "Ready"
	I1123 09:57:46.509028  296642 pod_ready.go:86] duration metric: took 5.757344ms for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.512072  296642 pod_ready.go:83] waiting for pod "etcd-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.517750  296642 pod_ready.go:94] pod "etcd-no-preload-309734" is "Ready"
	I1123 09:57:46.517777  296642 pod_ready.go:86] duration metric: took 5.673234ms for pod "etcd-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.520446  296642 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.525480  296642 pod_ready.go:94] pod "kube-apiserver-no-preload-309734" is "Ready"
	I1123 09:57:46.525513  296642 pod_ready.go:86] duration metric: took 5.036877ms for pod "kube-apiserver-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.528196  296642 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.902790  296642 pod_ready.go:94] pod "kube-controller-manager-no-preload-309734" is "Ready"
	I1123 09:57:46.902815  296642 pod_ready.go:86] duration metric: took 374.588413ms for pod "kube-controller-manager-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.104263  296642 pod_ready.go:83] waiting for pod "kube-proxy-jpvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.504876  296642 pod_ready.go:94] pod "kube-proxy-jpvhc" is "Ready"
	I1123 09:57:47.504999  296642 pod_ready.go:86] duration metric: took 400.696383ms for pod "kube-proxy-jpvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.706275  296642 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:48.104684  296642 pod_ready.go:94] pod "kube-scheduler-no-preload-309734" is "Ready"
	I1123 09:57:48.104720  296642 pod_ready.go:86] duration metric: took 398.41369ms for pod "kube-scheduler-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:48.104739  296642 pod_ready.go:40] duration metric: took 1.606531718s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:48.181507  296642 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:57:48.183959  296642 out.go:179] * Done! kubectl is now configured to use "no-preload-309734" cluster and "default" namespace by default
	I1123 09:57:46.740944  311138 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-696492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.699532205s)
	I1123 09:57:46.741010  311138 kic.go:203] duration metric: took 4.699734046s to extract preloaded images to volume ...
	W1123 09:57:46.741179  311138 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:57:46.741234  311138 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:57:46.741304  311138 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:57:46.807009  311138 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-696492 --name default-k8s-diff-port-696492 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --network default-k8s-diff-port-696492 --ip 192.168.85.2 --volume default-k8s-diff-port-696492:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:57:47.199589  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Running}}
	I1123 09:57:47.220655  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.242623  311138 cli_runner.go:164] Run: docker exec default-k8s-diff-port-696492 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:57:47.295743  311138 oci.go:144] the created container "default-k8s-diff-port-696492" has a running status.
	I1123 09:57:47.295783  311138 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa...
	I1123 09:57:47.562280  311138 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:57:47.611801  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.650055  311138 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:57:47.650078  311138 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-696492 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:57:47.733580  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.763876  311138 machine.go:94] provisionDockerMachine start ...
	I1123 09:57:47.763997  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:47.798484  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:47.798947  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:47.798969  311138 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:57:47.966787  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-696492
	
	I1123 09:57:47.966822  311138 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-696492"
	I1123 09:57:47.966888  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:47.993804  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:47.994099  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:47.994117  311138 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-696492 && echo "default-k8s-diff-port-696492" | sudo tee /etc/hostname
	I1123 09:57:48.174661  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-696492
	
	I1123 09:57:48.174752  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.203529  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:48.203843  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:48.203881  311138 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-696492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-696492/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-696492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:57:48.379959  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:57:48.380002  311138 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-3552/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-3552/.minikube}
	I1123 09:57:48.380096  311138 ubuntu.go:190] setting up certificates
	I1123 09:57:48.380127  311138 provision.go:84] configureAuth start
	I1123 09:57:48.380222  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:48.421922  311138 provision.go:143] copyHostCerts
	I1123 09:57:48.422045  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem, removing ...
	I1123 09:57:48.422074  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem
	I1123 09:57:48.422196  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem (1679 bytes)
	I1123 09:57:48.422353  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem, removing ...
	I1123 09:57:48.422365  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem
	I1123 09:57:48.422399  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem (1082 bytes)
	I1123 09:57:48.422467  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem, removing ...
	I1123 09:57:48.422523  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem
	I1123 09:57:48.422566  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem (1123 bytes)
	I1123 09:57:48.422642  311138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-696492 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-696492 localhost minikube]
	I1123 09:57:48.539621  311138 provision.go:177] copyRemoteCerts
	I1123 09:57:48.539708  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:57:48.539762  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.564284  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:48.677154  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 09:57:48.704807  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 09:57:48.730566  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:57:48.755362  311138 provision.go:87] duration metric: took 375.193527ms to configureAuth
	I1123 09:57:48.755396  311138 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:57:48.755732  311138 config.go:182] Loaded profile config "default-k8s-diff-port-696492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:48.755752  311138 machine.go:97] duration metric: took 991.839554ms to provisionDockerMachine
	I1123 09:57:48.755762  311138 client.go:176] duration metric: took 7.329805852s to LocalClient.Create
	I1123 09:57:48.755786  311138 start.go:167] duration metric: took 7.329894759s to libmachine.API.Create "default-k8s-diff-port-696492"
	I1123 09:57:48.755799  311138 start.go:293] postStartSetup for "default-k8s-diff-port-696492" (driver="docker")
	I1123 09:57:48.755811  311138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:57:48.755868  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:57:48.755919  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.784317  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:48.901734  311138 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:57:48.906292  311138 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:57:48.906325  311138 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:57:48.906355  311138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/addons for local assets ...
	I1123 09:57:48.906577  311138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/files for local assets ...
	I1123 09:57:48.906715  311138 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem -> 71092.pem in /etc/ssl/certs
	I1123 09:57:48.906835  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:57:48.917431  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:57:48.947477  311138 start.go:296] duration metric: took 191.661634ms for postStartSetup
	I1123 09:57:48.947957  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:48.973141  311138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json ...
	I1123 09:57:48.973692  311138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:57:48.973751  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.996029  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.106682  311138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:57:49.112230  311138 start.go:128] duration metric: took 7.689569326s to createHost
	I1123 09:57:49.112259  311138 start.go:83] releasing machines lock for "default-k8s-diff-port-696492", held for 7.689795634s
	I1123 09:57:49.112351  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:49.135976  311138 ssh_runner.go:195] Run: cat /version.json
	I1123 09:57:49.136033  311138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:57:49.136042  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:49.136113  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:49.160077  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.161278  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.264125  311138 ssh_runner.go:195] Run: systemctl --version
	I1123 09:57:49.329282  311138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:57:49.335197  311138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:57:49.335268  311138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:57:49.366357  311138 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:57:49.366380  311138 start.go:496] detecting cgroup driver to use...
	I1123 09:57:49.366416  311138 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:57:49.366470  311138 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 09:57:49.383235  311138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 09:57:49.399768  311138 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:57:49.399842  311138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:57:49.420125  311138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:57:49.442300  311138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:57:49.541498  311138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:57:49.659194  311138 docker.go:234] disabling docker service ...
	I1123 09:57:49.659272  311138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:57:49.682070  311138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:57:49.698015  311138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:57:49.798105  311138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:57:49.894575  311138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:57:49.911733  311138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:57:49.931314  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 09:57:49.945424  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 09:57:49.956889  311138 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 09:57:49.956953  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 09:57:49.967923  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:57:49.979575  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 09:57:49.991202  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:57:50.002918  311138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:57:50.015086  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 09:57:50.027588  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 09:57:50.038500  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 09:57:50.050508  311138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:57:50.060907  311138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:57:50.069882  311138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:57:50.169936  311138 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 09:57:50.287676  311138 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 09:57:50.287747  311138 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 09:57:50.292388  311138 start.go:564] Will wait 60s for crictl version
	I1123 09:57:50.292450  311138 ssh_runner.go:195] Run: which crictl
	I1123 09:57:50.296873  311138 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:57:50.325533  311138 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
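	After the sed edits above switch containerd to the systemd cgroup driver and the daemon is restarted, crictl reports containerd v2.1.5 speaking CRI v1. A hypothetical follow-up check that the setting landed (these commands are assumptions, not taken from the log):
	    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # should show: SystemdCgroup = true
	    sudo crictl info | grep -i cgroup                      # cgroup driver as seen through the CRI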
	I1123 09:57:50.325605  311138 ssh_runner.go:195] Run: containerd --version
	I1123 09:57:50.350974  311138 ssh_runner.go:195] Run: containerd --version
	I1123 09:57:50.381808  311138 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 09:57:50.383456  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:57:50.407801  311138 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 09:57:50.413000  311138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:57:50.425563  311138 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:57:50.425681  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:50.425728  311138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:57:50.458513  311138 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:57:50.458540  311138 containerd.go:534] Images already preloaded, skipping extraction
	I1123 09:57:50.458578  311138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:57:50.490466  311138 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:57:50.490488  311138 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:57:50.490496  311138 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1123 09:57:50.490604  311138 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-696492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
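The kubelet unit override and flags shown above are assembled in memory and pushed to the node over SSH before the service is started; the "scp memory", daemon-reload, and start kubelet entries a few lines below record exactly that sequence. A condensed shell sketch of the same steps, assuming the drop-in content has been saved locally as 10-kubeadm.conf (an illustrative filename only):

    # install the kubelet drop-in override
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

    # reload systemd so the override is picked up, then start kubelet
    sudo systemctl daemon-reload
    sudo systemctl start kubelet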
	I1123 09:57:50.490683  311138 ssh_runner.go:195] Run: sudo crictl info
	I1123 09:57:50.519013  311138 cni.go:84] Creating CNI manager for ""
	I1123 09:57:50.519047  311138 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:57:50.519066  311138 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:57:50.519093  311138 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-696492 NodeName:default-k8s-diff-port-696492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:57:50.519249  311138 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-696492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
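The generated kubeadm config above can be exercised before the real init: kubeadm accepts a --dry-run flag that walks the init phases against the same YAML without changing the node, which is a convenient way to sanity-check a file like this. A minimal sketch, assuming the config has already been written to /var/tmp/minikube/kubeadm.yaml as the log shows shortly below:

    # walk the init phases against the generated config without applying anything
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run

    # print kubeadm's defaulted InitConfiguration/ClusterConfiguration for comparison
    kubeadm config print init-defaults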
	
	I1123 09:57:50.519326  311138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:57:50.531186  311138 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:57:50.531258  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:57:50.540764  311138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1123 09:57:50.556738  311138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:57:50.577978  311138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1123 09:57:50.594432  311138 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:57:50.598984  311138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:57:50.611087  311138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:57:50.713969  311138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:57:50.738999  311138 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492 for IP: 192.168.85.2
	I1123 09:57:50.739022  311138 certs.go:195] generating shared ca certs ...
	I1123 09:57:50.739042  311138 certs.go:227] acquiring lock for ca certs: {Name:mkf0ec2efb8866dd9406da39e0a5f5dc931fd377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:50.739203  311138 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key
	I1123 09:57:50.739256  311138 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key
	I1123 09:57:50.739271  311138 certs.go:257] generating profile certs ...
	I1123 09:57:50.739364  311138 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.key
	I1123 09:57:50.739382  311138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.crt with IP's: []
	I1123 09:57:50.902937  311138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.crt ...
	I1123 09:57:50.902975  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.crt: {Name:mk1be782fc73373be310b15837c277ec6685e2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:50.903176  311138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.key ...
	I1123 09:57:50.903195  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.key: {Name:mk6db5327a581ec783720f15c44b3730584ff35a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:50.903326  311138 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1
	I1123 09:57:50.903367  311138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 09:57:51.007041  311138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1 ...
	I1123 09:57:51.007079  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1: {Name:mk4d1a5fa60f123a8319b137c9ec74f1fa189955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.007285  311138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1 ...
	I1123 09:57:51.007298  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1: {Name:mkdd2b300e22459c4a8968bc56aef3e76c8f86f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.007514  311138 certs.go:382] copying /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt
	I1123 09:57:51.007636  311138 certs.go:386] copying /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key
	I1123 09:57:51.007701  311138 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key
	I1123 09:57:51.007715  311138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt with IP's: []
	I1123 09:57:51.045607  311138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt ...
	I1123 09:57:51.045642  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt: {Name:mkb29252ee6ba2f8bc8fb350259fbc7d524b689b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.045864  311138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key ...
	I1123 09:57:51.045887  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key: {Name:mk39c6b0c10f773b67a0a811d41c76d128d66647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
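The client, apiserver, and proxy-client certificates above are generated by minikube's own crypto code and signed by the shared minikubeCA, with the apiserver cert carrying the service, loopback, and node IPs as SANs. For reference only, an equivalent apiserver certificate could be produced with openssl; this is an illustrative sketch assuming the CA pair (ca.crt, ca.key) sits in the working directory, not what minikube actually runs:

    # key and CSR for the apiserver certificate
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr

    # sign with the cluster CA, embedding the same IP SANs seen in the log above
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2")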
	I1123 09:57:51.046116  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem (1338 bytes)
	W1123 09:57:51.046161  311138 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109_empty.pem, impossibly tiny 0 bytes
	I1123 09:57:51.046173  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:57:51.046197  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem (1082 bytes)
	I1123 09:57:51.046222  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:57:51.046245  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem (1679 bytes)
	I1123 09:57:51.046287  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:57:51.047046  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:57:51.071141  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 09:57:51.092546  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:57:51.116776  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:57:51.139235  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:57:51.160968  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:57:51.181315  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:57:51.203122  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:57:51.226401  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:57:51.252100  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem --> /usr/share/ca-certificates/7109.pem (1338 bytes)
	I1123 09:57:51.274287  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /usr/share/ca-certificates/71092.pem (1708 bytes)
	I1123 09:57:51.297105  311138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:57:51.313841  311138 ssh_runner.go:195] Run: openssl version
	I1123 09:57:51.322431  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:57:51.335037  311138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:57:51.339776  311138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:57:51.339848  311138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:57:51.383842  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:57:51.395820  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7109.pem && ln -fs /usr/share/ca-certificates/7109.pem /etc/ssl/certs/7109.pem"
	I1123 09:57:51.406811  311138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7109.pem
	I1123 09:57:51.411731  311138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:26 /usr/share/ca-certificates/7109.pem
	I1123 09:57:51.411802  311138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7109.pem
	I1123 09:57:51.456262  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7109.pem /etc/ssl/certs/51391683.0"
	I1123 09:57:51.467466  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71092.pem && ln -fs /usr/share/ca-certificates/71092.pem /etc/ssl/certs/71092.pem"
	I1123 09:57:51.479299  311138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71092.pem
	I1123 09:57:51.484434  311138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:26 /usr/share/ca-certificates/71092.pem
	I1123 09:57:51.484508  311138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71092.pem
	I1123 09:57:51.525183  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71092.pem /etc/ssl/certs/3ec20f2e.0"
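The openssl x509 -hash calls and the *.0 symlinks above follow OpenSSL's hashed-directory convention: verifiers scanning /etc/ssl/certs locate an issuer by the hash of its subject name, so each installed PEM needs a <subject-hash>.0 link pointing back at it. A sketch of that idiom for a single certificate, using minikubeCA.pem as the example:

    # compute the subject-name hash OpenSSL uses for directory lookups
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)

    # link the certificate under that hash so chain verification can find it
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"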
	I1123 09:57:51.535904  311138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:57:51.540741  311138 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:57:51.540806  311138 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:57:51.540889  311138 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 09:57:51.540937  311138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:57:51.573411  311138 cri.go:89] found id: ""
	I1123 09:57:51.573483  311138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:57:51.583208  311138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:57:51.592170  311138 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:57:51.592237  311138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:57:51.601224  311138 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:57:51.601243  311138 kubeadm.go:158] found existing configuration files:
	
	I1123 09:57:51.601292  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 09:57:51.610806  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:57:51.610871  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:57:51.619590  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 09:57:51.628676  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:57:51.628753  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:57:51.638382  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 09:57:51.648357  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:57:51.648452  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:57:51.657606  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 09:57:51.667094  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:57:51.667160  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:57:51.677124  311138 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:57:51.753028  311138 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 09:57:51.832851  311138 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	00150dfde10c5       56cc512116c8f       11 seconds ago      Running             busybox                   0                   387dc93d0a8cf       busybox                                      default
	db362a96711e6       52546a367cc9e       17 seconds ago      Running             coredns                   0                   f79b2dece7e26       coredns-66bc5c9577-8dgc7                     kube-system
	01f6da8fb3f7d       6e38f40d628db       17 seconds ago      Running             storage-provisioner       0                   616ba95f738c5       storage-provisioner                          kube-system
	de43573b10ccd       409467f978b4a       28 seconds ago      Running             kindnet-cni               0                   a3928ac5eaafb       kindnet-f76c2                                kube-system
	c59b716fcc34d       fc25172553d79       29 seconds ago      Running             kube-proxy                0                   9f1049e06b7be       kube-proxy-wm7k2                             kube-system
	ea002215dc5ff       7dd6aaa1717ab       42 seconds ago      Running             kube-scheduler            0                   8b2fee9d2694f       kube-scheduler-embed-certs-412583            kube-system
	786d0436a85fd       5f1f5298c888d       42 seconds ago      Running             etcd                      0                   8100c8a61784d       etcd-embed-certs-412583                      kube-system
	72aa47eb89fbb       c3994bc696102       42 seconds ago      Running             kube-apiserver            0                   179cb11cf0ad3       kube-apiserver-embed-certs-412583            kube-system
	0275433c40df6       c80c8dbafe7dd       42 seconds ago      Running             kube-controller-manager   0                   8a49c491842a3       kube-controller-manager-embed-certs-412583   kube-system
	
	
	==> containerd <==
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.263149253Z" level=info msg="connecting to shim 01f6da8fb3f7dfb36a0d1bf7ac34fa2c7715a85d4db29e51e680371cf976de98" address="unix:///run/containerd/s/a7b6f230a299bda0a1f0d256e0bd0247043fa02e595c6d77c8c5ff35955b1815" protocol=ttrpc version=3
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.264257110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8dgc7,Uid:f685cc03-30df-4119-9d66-0e808c2d3c93,Namespace:kube-system,Attempt:0,} returns sandbox id \"f79b2dece7e264261986c54a3329a94f4a2f31499e5aa8db86f0bd2ff6e4e3cc\""
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.277437170Z" level=info msg="CreateContainer within sandbox \"f79b2dece7e264261986c54a3329a94f4a2f31499e5aa8db86f0bd2ff6e4e3cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.287168811Z" level=info msg="Container db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.297178125Z" level=info msg="CreateContainer within sandbox \"f79b2dece7e264261986c54a3329a94f4a2f31499e5aa8db86f0bd2ff6e4e3cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4\""
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.298310630Z" level=info msg="StartContainer for \"db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4\""
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.301376554Z" level=info msg="connecting to shim db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4" address="unix:///run/containerd/s/193a80da954a991752534a897a9195e52bba571ad363258772fc97fd3f38dac6" protocol=ttrpc version=3
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.336587190Z" level=info msg="StartContainer for \"01f6da8fb3f7dfb36a0d1bf7ac34fa2c7715a85d4db29e51e680371cf976de98\" returns successfully"
	Nov 23 09:57:42 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:42.375298903Z" level=info msg="StartContainer for \"db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4\" returns successfully"
	Nov 23 09:57:45 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:45.977288983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:37a908eb-6709-4200-8522-c8fe9a550046,Namespace:default,Attempt:0,}"
	Nov 23 09:57:46 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:46.660292281Z" level=info msg="connecting to shim 387dc93d0a8cf354ec95ce64993a0addad111dcef088fbc67260f0afeb734d60" address="unix:///run/containerd/s/fdaadc5b3798fd4dbeaa013b873623cdfb02e487051186fb770f84af6b6bfa04" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:57:46 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:46.736677153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:37a908eb-6709-4200-8522-c8fe9a550046,Namespace:default,Attempt:0,} returns sandbox id \"387dc93d0a8cf354ec95ce64993a0addad111dcef088fbc67260f0afeb734d60\""
	Nov 23 09:57:46 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:46.739095430Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.915793673Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.916930025Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.918571491Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.921809864Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.922431651Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.183293897s"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.922489800Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.927784863Z" level=info msg="CreateContainer within sandbox \"387dc93d0a8cf354ec95ce64993a0addad111dcef088fbc67260f0afeb734d60\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.937973254Z" level=info msg="Container 00150dfde10c51a55b91523dc6f606c6abbf087ac6d6bbe89494e33ad99c3223: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.948097816Z" level=info msg="CreateContainer within sandbox \"387dc93d0a8cf354ec95ce64993a0addad111dcef088fbc67260f0afeb734d60\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"00150dfde10c51a55b91523dc6f606c6abbf087ac6d6bbe89494e33ad99c3223\""
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.948875044Z" level=info msg="StartContainer for \"00150dfde10c51a55b91523dc6f606c6abbf087ac6d6bbe89494e33ad99c3223\""
	Nov 23 09:57:48 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:48.949990703Z" level=info msg="connecting to shim 00150dfde10c51a55b91523dc6f606c6abbf087ac6d6bbe89494e33ad99c3223" address="unix:///run/containerd/s/fdaadc5b3798fd4dbeaa013b873623cdfb02e487051186fb770f84af6b6bfa04" protocol=ttrpc version=3
	Nov 23 09:57:49 embed-certs-412583 containerd[662]: time="2025-11-23T09:57:49.019759190Z" level=info msg="StartContainer for \"00150dfde10c51a55b91523dc6f606c6abbf087ac6d6bbe89494e33ad99c3223\" returns successfully"
	
	
	==> coredns [db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49636 - 8471 "HINFO IN 3150291320313079176.8990028570470516833. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.165694981s
	
	
	==> describe nodes <==
	Name:               embed-certs-412583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-412583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=embed-certs-412583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_57_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:57:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-412583
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:57:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:57:54 +0000   Sun, 23 Nov 2025 09:57:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:57:54 +0000   Sun, 23 Nov 2025 09:57:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:57:54 +0000   Sun, 23 Nov 2025 09:57:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:57:54 +0000   Sun, 23 Nov 2025 09:57:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-412583
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                9da7e891-3f25-4983-8fba-6666bb3db318
	  Boot ID:                    e4c4d39b-bebd-4037-9237-26b945dbe084
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-66bc5c9577-8dgc7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-embed-certs-412583                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-f76c2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-embed-certs-412583             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-embed-certs-412583    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-wm7k2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-embed-certs-412583             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 37s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  37s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  36s   kubelet          Node embed-certs-412583 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s   kubelet          Node embed-certs-412583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s   kubelet          Node embed-certs-412583 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node embed-certs-412583 event: Registered Node embed-certs-412583 in Controller
	  Normal  NodeReady                19s   kubelet          Node embed-certs-412583 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.288463] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 09:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[Nov23 09:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.195562] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +5.912917] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 c0 1c 98 33 a9 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.002091] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 47 bd bf 96 57 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[  +4.460318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 85 b9 91 f8 a4 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +2.904694] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	[Nov23 09:57] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 76 48 bf 8b d1 fc 08 06
	[  +0.000931] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	
	
	==> etcd [786d0436a85fd77d6e60804d917a286d3d71195fdb79aff7ac861499ed514dbf] <==
	{"level":"warn","ts":"2025-11-23T09:57:20.187808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.202358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.216919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.236022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.249228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.265771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.277721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.287910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.301044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.317683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.333908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.350810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.368555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:20.457172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45970","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:57:45.481895Z","caller":"traceutil/trace.go:172","msg":"trace[2106743918] linearizableReadLoop","detail":"{readStateIndex:441; appliedIndex:441; }","duration":"149.262873ms","start":"2025-11-23T09:57:45.332612Z","end":"2025-11-23T09:57:45.481875Z","steps":["trace[2106743918] 'read index received'  (duration: 149.255305ms)","trace[2106743918] 'applied index is now lower than readState.Index'  (duration: 6.205µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:45.595660Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.52134ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:57:45.595740Z","caller":"traceutil/trace.go:172","msg":"trace[443334218] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:425; }","duration":"200.603744ms","start":"2025-11-23T09:57:45.395120Z","end":"2025-11-23T09:57:45.595724Z","steps":["trace[443334218] 'range keys from in-memory index tree'  (duration: 200.48665ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:45.595661Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"263.034946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:57:45.595792Z","caller":"traceutil/trace.go:172","msg":"trace[909354569] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:425; }","duration":"263.192733ms","start":"2025-11-23T09:57:45.332591Z","end":"2025-11-23T09:57:45.595784Z","steps":["trace[909354569] 'agreement among raft nodes before linearized reading'  (duration: 149.371763ms)","trace[909354569] 'range keys from in-memory index tree'  (duration: 113.623338ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:45.596376Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.821874ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790208425156421 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/default/busybox\" mod_revision:425 > success:<request_put:<key:\"/registry/pods/default/busybox\" value_size:1260 >> failure:<request_range:<key:\"/registry/pods/default/busybox\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:57:45.596515Z","caller":"traceutil/trace.go:172","msg":"trace[1557009709] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"273.56361ms","start":"2025-11-23T09:57:45.322937Z","end":"2025-11-23T09:57:45.596501Z","steps":["trace[1557009709] 'process raft request'  (duration: 159.045437ms)","trace[1557009709] 'compare'  (duration: 113.59463ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:57:45.790683Z","caller":"traceutil/trace.go:172","msg":"trace[1096355566] linearizableReadLoop","detail":"{readStateIndex:443; appliedIndex:443; }","duration":"149.60145ms","start":"2025-11-23T09:57:45.641048Z","end":"2025-11-23T09:57:45.790649Z","steps":["trace[1096355566] 'read index received'  (duration: 149.590767ms)","trace[1096355566] 'applied index is now lower than readState.Index'  (duration: 9.211µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:45.801745Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.673197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:57:45.801813Z","caller":"traceutil/trace.go:172","msg":"trace[26794502] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:427; }","duration":"160.761222ms","start":"2025-11-23T09:57:45.641038Z","end":"2025-11-23T09:57:45.801800Z","steps":["trace[26794502] 'agreement among raft nodes before linearized reading'  (duration: 149.705905ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:57:45.801900Z","caller":"traceutil/trace.go:172","msg":"trace[1536416025] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"171.194558ms","start":"2025-11-23T09:57:45.630689Z","end":"2025-11-23T09:57:45.801883Z","steps":["trace[1536416025] 'process raft request'  (duration: 160.021394ms)","trace[1536416025] 'compare'  (duration: 11.07365ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:58:00 up 40 min,  0 user,  load average: 4.93, 4.11, 2.62
	Linux embed-certs-412583 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [de43573b10ccd2db93907531b927156400b38e1ccc072df4694f86271eadb2a7] <==
	I1123 09:57:31.366093       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:57:31.366394       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 09:57:31.366574       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:57:31.366591       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:57:31.366613       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:57:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:57:31.662809       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:57:31.662848       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:57:31.662910       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:57:31.663873       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:57:32.132788       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:57:32.132836       1 metrics.go:72] Registering metrics
	I1123 09:57:32.132932       1 controller.go:711] "Syncing nftables rules"
	I1123 09:57:41.663029       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:57:41.663106       1 main.go:301] handling current node
	I1123 09:57:51.663064       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:57:51.663097       1 main.go:301] handling current node
	
	
	==> kube-apiserver [72aa47eb89fbb59da47429e762a23f4e68077fe27b50deb7d4860da7370e5f9b] <==
	I1123 09:57:21.236407       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:57:21.236459       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:57:21.251314       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:21.255073       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:57:21.276590       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:21.280570       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:57:21.440092       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:57:22.162976       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:57:22.171528       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:57:22.171550       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:57:23.042887       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:57:23.156762       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:57:23.262042       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:57:23.270761       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1123 09:57:23.272071       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:57:23.277943       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:57:24.058407       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:57:24.064939       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:57:24.078625       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:57:24.088680       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:57:29.356169       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:29.361491       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:29.855754       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:57:30.203249       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 09:57:56.444276       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:46964: use of closed network connection
	
	
	==> kube-controller-manager [0275433c40df693012ccd198e9424273105899b21f0e3e75bc2219ef022bdec2] <==
	I1123 09:57:29.151404       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:57:29.151569       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:57:29.151629       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:57:29.151623       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:57:29.151898       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:57:29.151953       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:57:29.152073       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:57:29.152128       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:57:29.152142       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:57:29.152152       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:57:29.153392       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 09:57:29.153531       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 09:57:29.153605       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 09:57:29.153612       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 09:57:29.153619       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 09:57:29.153797       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:57:29.155189       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:57:29.158317       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:57:29.166804       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:57:29.169079       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-412583" podCIDRs=["10.244.0.0/24"]
	I1123 09:57:29.178972       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:57:29.199943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:57:29.199967       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:57:29.199979       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:57:44.150263       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c59b716fcc34de4cd73575b55a3765828129eb26a8da3f4e32971f259a35d5b9] <==
	I1123 09:57:30.916623       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:57:30.988032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:57:31.088790       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:57:31.088849       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 09:57:31.088971       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:57:31.116731       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:57:31.116825       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:57:31.123212       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:57:31.123727       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:57:31.123771       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:57:31.127038       1 config.go:200] "Starting service config controller"
	I1123 09:57:31.127074       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:57:31.127211       1 config.go:309] "Starting node config controller"
	I1123 09:57:31.127237       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:57:31.127260       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:57:31.127265       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:57:31.127261       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:57:31.127310       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:57:31.227299       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:57:31.227378       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:57:31.227389       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:57:31.227411       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ea002215dc5ff9de708bfb501c13731db3b837342413eaa850d2bdaa9db3326b] <==
	E1123 09:57:21.199364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:57:21.199528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:57:21.199629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:57:21.199733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:57:21.199855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:57:21.199851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:57:22.065614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:57:22.077308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:57:22.103659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:57:22.156471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:57:22.161197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:57:22.215761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:57:22.276078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:57:22.357615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:57:22.371618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:57:22.401762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:57:22.419069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:57:22.477267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:57:22.516489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:57:22.518696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:57:22.565990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:57:22.586754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:57:22.638652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:57:22.714174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1123 09:57:25.092978       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:57:24 embed-certs-412583 kubelet[1414]: E1123 09:57:24.939085    1414 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-embed-certs-412583\" already exists" pod="kube-system/kube-controller-manager-embed-certs-412583"
	Nov 23 09:57:24 embed-certs-412583 kubelet[1414]: I1123 09:57:24.980038    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-412583" podStartSLOduration=0.980011304 podStartE2EDuration="980.011304ms" podCreationTimestamp="2025-11-23 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:24.954274858 +0000 UTC m=+1.143838257" watchObservedRunningTime="2025-11-23 09:57:24.980011304 +0000 UTC m=+1.169574704"
	Nov 23 09:57:25 embed-certs-412583 kubelet[1414]: I1123 09:57:25.006846    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-412583" podStartSLOduration=1.006606971 podStartE2EDuration="1.006606971s" podCreationTimestamp="2025-11-23 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:24.987760256 +0000 UTC m=+1.177323653" watchObservedRunningTime="2025-11-23 09:57:25.006606971 +0000 UTC m=+1.196170387"
	Nov 23 09:57:25 embed-certs-412583 kubelet[1414]: I1123 09:57:25.007093    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-412583" podStartSLOduration=3.007077201 podStartE2EDuration="3.007077201s" podCreationTimestamp="2025-11-23 09:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:25.006544168 +0000 UTC m=+1.196107566" watchObservedRunningTime="2025-11-23 09:57:25.007077201 +0000 UTC m=+1.196640602"
	Nov 23 09:57:25 embed-certs-412583 kubelet[1414]: I1123 09:57:25.045418    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-412583" podStartSLOduration=1.04539165 podStartE2EDuration="1.04539165s" podCreationTimestamp="2025-11-23 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:25.023834862 +0000 UTC m=+1.213398262" watchObservedRunningTime="2025-11-23 09:57:25.04539165 +0000 UTC m=+1.234955049"
	Nov 23 09:57:29 embed-certs-412583 kubelet[1414]: I1123 09:57:29.207810    1414 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:57:29 embed-certs-412583 kubelet[1414]: I1123 09:57:29.209108    1414 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.337770    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/16967e76-b4bf-4a99-aab9-d7f76cbb0830-cni-cfg\") pod \"kindnet-f76c2\" (UID: \"16967e76-b4bf-4a99-aab9-d7f76cbb0830\") " pod="kube-system/kindnet-f76c2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.338143    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9-xtables-lock\") pod \"kube-proxy-wm7k2\" (UID: \"120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9\") " pod="kube-system/kube-proxy-wm7k2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.340545    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16967e76-b4bf-4a99-aab9-d7f76cbb0830-lib-modules\") pod \"kindnet-f76c2\" (UID: \"16967e76-b4bf-4a99-aab9-d7f76cbb0830\") " pod="kube-system/kindnet-f76c2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.342812    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cfk2\" (UniqueName: \"kubernetes.io/projected/16967e76-b4bf-4a99-aab9-d7f76cbb0830-kube-api-access-8cfk2\") pod \"kindnet-f76c2\" (UID: \"16967e76-b4bf-4a99-aab9-d7f76cbb0830\") " pod="kube-system/kindnet-f76c2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.343058    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9-kube-proxy\") pod \"kube-proxy-wm7k2\" (UID: \"120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9\") " pod="kube-system/kube-proxy-wm7k2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.343664    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2krm\" (UniqueName: \"kubernetes.io/projected/120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9-kube-api-access-w2krm\") pod \"kube-proxy-wm7k2\" (UID: \"120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9\") " pod="kube-system/kube-proxy-wm7k2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.344587    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16967e76-b4bf-4a99-aab9-d7f76cbb0830-xtables-lock\") pod \"kindnet-f76c2\" (UID: \"16967e76-b4bf-4a99-aab9-d7f76cbb0830\") " pod="kube-system/kindnet-f76c2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.344818    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9-lib-modules\") pod \"kube-proxy-wm7k2\" (UID: \"120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9\") " pod="kube-system/kube-proxy-wm7k2"
	Nov 23 09:57:30 embed-certs-412583 kubelet[1414]: I1123 09:57:30.976417    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wm7k2" podStartSLOduration=0.97639176 podStartE2EDuration="976.39176ms" podCreationTimestamp="2025-11-23 09:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:30.97175995 +0000 UTC m=+7.161323349" watchObservedRunningTime="2025-11-23 09:57:30.97639176 +0000 UTC m=+7.165955158"
	Nov 23 09:57:31 embed-certs-412583 kubelet[1414]: I1123 09:57:31.965243    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-f76c2" podStartSLOduration=1.965220701 podStartE2EDuration="1.965220701s" podCreationTimestamp="2025-11-23 09:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:31.965049893 +0000 UTC m=+8.154613292" watchObservedRunningTime="2025-11-23 09:57:31.965220701 +0000 UTC m=+8.154784100"
	Nov 23 09:57:41 embed-certs-412583 kubelet[1414]: I1123 09:57:41.764467    1414 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:57:41 embed-certs-412583 kubelet[1414]: I1123 09:57:41.921311    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pczs\" (UniqueName: \"kubernetes.io/projected/f685cc03-30df-4119-9d66-0e808c2d3c93-kube-api-access-4pczs\") pod \"coredns-66bc5c9577-8dgc7\" (UID: \"f685cc03-30df-4119-9d66-0e808c2d3c93\") " pod="kube-system/coredns-66bc5c9577-8dgc7"
	Nov 23 09:57:41 embed-certs-412583 kubelet[1414]: I1123 09:57:41.921501    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f685cc03-30df-4119-9d66-0e808c2d3c93-config-volume\") pod \"coredns-66bc5c9577-8dgc7\" (UID: \"f685cc03-30df-4119-9d66-0e808c2d3c93\") " pod="kube-system/coredns-66bc5c9577-8dgc7"
	Nov 23 09:57:41 embed-certs-412583 kubelet[1414]: I1123 09:57:41.921540    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dcf16920-e30b-42ab-8195-4ef946498d0f-tmp\") pod \"storage-provisioner\" (UID: \"dcf16920-e30b-42ab-8195-4ef946498d0f\") " pod="kube-system/storage-provisioner"
	Nov 23 09:57:41 embed-certs-412583 kubelet[1414]: I1123 09:57:41.921560    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6rhp\" (UniqueName: \"kubernetes.io/projected/dcf16920-e30b-42ab-8195-4ef946498d0f-kube-api-access-z6rhp\") pod \"storage-provisioner\" (UID: \"dcf16920-e30b-42ab-8195-4ef946498d0f\") " pod="kube-system/storage-provisioner"
	Nov 23 09:57:43 embed-certs-412583 kubelet[1414]: I1123 09:57:43.000608    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8dgc7" podStartSLOduration=13.000583929 podStartE2EDuration="13.000583929s" podCreationTimestamp="2025-11-23 09:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:43.000544713 +0000 UTC m=+19.190108137" watchObservedRunningTime="2025-11-23 09:57:43.000583929 +0000 UTC m=+19.190147342"
	Nov 23 09:57:43 embed-certs-412583 kubelet[1414]: I1123 09:57:43.030945    1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.030922513 podStartE2EDuration="12.030922513s" podCreationTimestamp="2025-11-23 09:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:43.014461805 +0000 UTC m=+19.204025204" watchObservedRunningTime="2025-11-23 09:57:43.030922513 +0000 UTC m=+19.220485912"
	Nov 23 09:57:45 embed-certs-412583 kubelet[1414]: I1123 09:57:45.747146    1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q4vb\" (UniqueName: \"kubernetes.io/projected/37a908eb-6709-4200-8522-c8fe9a550046-kube-api-access-8q4vb\") pod \"busybox\" (UID: \"37a908eb-6709-4200-8522-c8fe9a550046\") " pod="default/busybox"
	
	
	==> storage-provisioner [01f6da8fb3f7dfb36a0d1bf7ac34fa2c7715a85d4db29e51e680371cf976de98] <==
	W1123 09:57:42.365209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:57:42.365571       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:57:42.365706       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4cb99382-7f2c-4efe-9082-eae1f39758b2", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-412583_c2d51ccd-86cc-409b-a8dd-4eb050378ace became leader
	I1123 09:57:42.365777       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-412583_c2d51ccd-86cc-409b-a8dd-4eb050378ace!
	W1123 09:57:42.369067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:42.373535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:57:42.466312       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-412583_c2d51ccd-86cc-409b-a8dd-4eb050378ace!
	W1123 09:57:44.377239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:44.386889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:46.390510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:46.425061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:48.433040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:48.445234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:50.449853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:50.456157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:52.460081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:52.466504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:54.470173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:54.475406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:56.478726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:56.484037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:58.489776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:58.498794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:00.503288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:00.516470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412583 -n embed-certs-412583
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-412583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (16.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (13.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-309734 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8d46a619-a382-4103-900c-1ce2911f6fb9] Pending
helpers_test.go:352: "busybox" [8d46a619-a382-4103-900c-1ce2911f6fb9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8d46a619-a382-4103-900c-1ce2911f6fb9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003866114s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-309734 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
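(For reference, the failing check above can be re-run by hand against the same profile. This is a minimal sketch reusing the command already shown in this log; the profile and pod names are specific to this run, and the second command is only an assumed way to compare against the node container's own limit, not part of the test.)

    # soft open-file limit inside the busybox test pod (test expects 1048576, this run observed 1024)
    kubectl --context no-preload-309734 exec busybox -- /bin/sh -c "ulimit -n"
    # assumed comparison: the same limit inside the minikube node container
    out/minikube-linux-amd64 -p no-preload-309734 ssh "ulimit -n"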
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-309734
helpers_test.go:243: (dbg) docker inspect no-preload-309734:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9",
	        "Created": "2025-11-23T09:56:51.96900087Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297581,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:56:52.019118184Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9/hosts",
	        "LogPath": "/var/lib/docker/containers/124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9/124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9-json.log",
	        "Name": "/no-preload-309734",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-309734:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-309734",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9",
	                "LowerDir": "/var/lib/docker/overlay2/f3ff7e283981f4ebe6f99aedbe0f6c8c431e57bbff30ff8b7adc33fdfcb8e86f-init/diff:/var/lib/docker/overlay2/c80a0dfdb81b7753b0a82e2bc6458805cbbad0a9ce5819c63e1d9b7b71ba226c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f3ff7e283981f4ebe6f99aedbe0f6c8c431e57bbff30ff8b7adc33fdfcb8e86f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f3ff7e283981f4ebe6f99aedbe0f6c8c431e57bbff30ff8b7adc33fdfcb8e86f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f3ff7e283981f4ebe6f99aedbe0f6c8c431e57bbff30ff8b7adc33fdfcb8e86f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-309734",
	                "Source": "/var/lib/docker/volumes/no-preload-309734/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-309734",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-309734",
	                "name.minikube.sigs.k8s.io": "no-preload-309734",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cca13c5a748b4620b9b193a9f5361761307d6436457d8fdb1ab5b5a8656d14c4",
	            "SandboxKey": "/var/run/docker/netns/cca13c5a748b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-309734": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2d57442761c92d8836fe3467356c458dc3c295dcf4c4aec369e66e2eb0689f5e",
	                    "EndpointID": "6607f39c95da6d4d22fb81caf588075ebcbfbedd8774d2fa0c442ec6a9a0af2c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "42:fb:18:a2:0c:6b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-309734",
	                        "124fd8be1e7f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-309734 -n no-preload-309734
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-309734 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-309734 logs -n 25: (1.243375367s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-676928 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /var/lib/kubelet/config.yaml                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status docker --all --full --no-pager                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat docker --no-pager                                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/docker/daemon.json                                                                                                                              │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo docker system info                                                                                                                                       │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl status cri-docker --all --full --no-pager                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat cri-docker --no-pager                                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                 │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                           │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cri-dockerd --version                                                                                                                                    │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status containerd --all --full --no-pager                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl cat containerd --no-pager                                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /lib/systemd/system/containerd.service                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/containerd/config.toml                                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo containerd config dump                                                                                                                                   │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status crio --all --full --no-pager                                                                                                            │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat crio --no-pager                                                                                                                            │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                  │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo crio config                                                                                                                                              │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p bridge-676928                                                                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p disable-driver-mounts-178820                                                                                                                                                │ disable-driver-mounts-178820 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ start   │ -p default-k8s-diff-port-696492 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ default-k8s-diff-port-696492 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-709593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                   │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ stop    │ -p old-k8s-version-709593 --alsologtostderr -v=3                                                                                                                               │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:57:41
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:57:41.194019  311138 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:57:41.194298  311138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:57:41.194308  311138 out.go:374] Setting ErrFile to fd 2...
	I1123 09:57:41.194312  311138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:57:41.194606  311138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:57:41.195144  311138 out.go:368] Setting JSON to false
	I1123 09:57:41.196591  311138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2400,"bootTime":1763889461,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:57:41.196668  311138 start.go:143] virtualization: kvm guest
	I1123 09:57:41.199167  311138 out.go:179] * [default-k8s-diff-port-696492] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:57:41.201043  311138 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:57:41.201094  311138 notify.go:221] Checking for updates...
	I1123 09:57:41.204382  311138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:57:41.206017  311138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:57:41.207959  311138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:57:41.209794  311138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:57:41.211809  311138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:57:41.214009  311138 config.go:182] Loaded profile config "embed-certs-412583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:41.214105  311138 config.go:182] Loaded profile config "no-preload-309734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:41.214180  311138 config.go:182] Loaded profile config "old-k8s-version-709593": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 09:57:41.214271  311138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:57:41.241306  311138 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:57:41.241474  311138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:57:41.312013  311138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:57:41.299959199 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:57:41.312116  311138 docker.go:319] overlay module found
	I1123 09:57:41.314243  311138 out.go:179] * Using the docker driver based on user configuration
	I1123 09:57:41.316002  311138 start.go:309] selected driver: docker
	I1123 09:57:41.316024  311138 start.go:927] validating driver "docker" against <nil>
	I1123 09:57:41.316037  311138 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:57:41.316751  311138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:57:41.385595  311138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:57:41.373759534 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:57:41.385794  311138 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:57:41.386023  311138 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:41.388087  311138 out.go:179] * Using Docker driver with root privileges
	I1123 09:57:41.389651  311138 cni.go:84] Creating CNI manager for ""
	I1123 09:57:41.389725  311138 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:57:41.389738  311138 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:57:41.389816  311138 start.go:353] cluster config:
	{Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:57:41.391556  311138 out.go:179] * Starting "default-k8s-diff-port-696492" primary control-plane node in "default-k8s-diff-port-696492" cluster
	I1123 09:57:41.392982  311138 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 09:57:41.394476  311138 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:57:41.395978  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:41.396028  311138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 09:57:41.396036  311138 cache.go:65] Caching tarball of preloaded images
	I1123 09:57:41.396075  311138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:57:41.396157  311138 preload.go:238] Found /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 09:57:41.396175  311138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 09:57:41.396320  311138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json ...
	I1123 09:57:41.396374  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json: {Name:mk3b81d8fd8561a54828649e3e510565221995b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:41.422089  311138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:57:41.422112  311138 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:57:41.422133  311138 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:57:41.422177  311138 start.go:360] acquireMachinesLock for default-k8s-diff-port-696492: {Name:mkc8ee83ed2b7a995e355ddec223dfeea233bbf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:57:41.422316  311138 start.go:364] duration metric: took 112.296µs to acquireMachinesLock for "default-k8s-diff-port-696492"
	I1123 09:57:41.422500  311138 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:57:41.422632  311138 start.go:125] createHost starting for "" (driver="docker")
	W1123 09:57:37.251564  300017 node_ready.go:57] node "embed-certs-412583" has "Ready":"False" status (will retry)
	W1123 09:57:39.751746  300017 node_ready.go:57] node "embed-certs-412583" has "Ready":"False" status (will retry)
	I1123 09:57:42.255256  300017 node_ready.go:49] node "embed-certs-412583" is "Ready"
	I1123 09:57:42.255291  300017 node_ready.go:38] duration metric: took 11.507766088s for node "embed-certs-412583" to be "Ready" ...
	I1123 09:57:42.255310  300017 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:57:42.255471  300017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:57:42.277737  300017 api_server.go:72] duration metric: took 12.028046262s to wait for apiserver process to appear ...
	I1123 09:57:42.277770  300017 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:57:42.277792  300017 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:57:42.285468  300017 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 09:57:42.287274  300017 api_server.go:141] control plane version: v1.34.1
	I1123 09:57:42.287395  300017 api_server.go:131] duration metric: took 9.61454ms to wait for apiserver health ...
	I1123 09:57:42.287422  300017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:57:42.294433  300017 system_pods.go:59] 8 kube-system pods found
	I1123 09:57:42.294478  300017 system_pods.go:61] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.294486  300017 system_pods.go:61] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.294493  300017 system_pods.go:61] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.294499  300017 system_pods.go:61] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.294505  300017 system_pods.go:61] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.294510  300017 system_pods.go:61] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.294515  300017 system_pods.go:61] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.294526  300017 system_pods.go:61] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.294539  300017 system_pods.go:74] duration metric: took 7.098728ms to wait for pod list to return data ...
	I1123 09:57:42.294549  300017 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:57:42.298321  300017 default_sa.go:45] found service account: "default"
	I1123 09:57:42.298368  300017 default_sa.go:55] duration metric: took 3.811774ms for default service account to be created ...
	I1123 09:57:42.298382  300017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:57:42.302807  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.302871  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.302887  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.302896  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.302903  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.302927  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.302937  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.302943  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.302954  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.303049  300017 retry.go:31] will retry after 268.599682ms: missing components: kube-dns
	I1123 09:57:42.577490  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.577531  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.577541  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.577550  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.577557  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.577563  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.577568  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.577573  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.577581  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.577600  300017 retry.go:31] will retry after 240.156475ms: missing components: kube-dns
	I1123 09:57:42.822131  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.822171  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.822177  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.822182  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.822186  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.822190  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.822194  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.822197  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.822202  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.822216  300017 retry.go:31] will retry after 383.926777ms: missing components: kube-dns
	I1123 09:57:43.211532  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:43.211575  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Running
	I1123 09:57:43.211585  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:43.211592  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:43.211600  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:43.211608  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:43.211624  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:43.211635  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:43.211640  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Running
	I1123 09:57:43.211650  300017 system_pods.go:126] duration metric: took 913.260942ms to wait for k8s-apps to be running ...
	I1123 09:57:43.211661  300017 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:57:43.211722  300017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:57:43.226055  300017 system_svc.go:56] duration metric: took 14.383207ms WaitForService to wait for kubelet
	I1123 09:57:43.226087  300017 kubeadm.go:587] duration metric: took 12.976401428s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:43.226108  300017 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:57:43.229492  300017 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:57:43.229524  300017 node_conditions.go:123] node cpu capacity is 8
	I1123 09:57:43.229547  300017 node_conditions.go:105] duration metric: took 3.432669ms to run NodePressure ...
	I1123 09:57:43.229560  300017 start.go:242] waiting for startup goroutines ...
	I1123 09:57:43.229570  300017 start.go:247] waiting for cluster config update ...
	I1123 09:57:43.229583  300017 start.go:256] writing updated cluster config ...
	I1123 09:57:43.229975  300017 ssh_runner.go:195] Run: rm -f paused
	I1123 09:57:43.235596  300017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:43.243251  300017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.248984  300017 pod_ready.go:94] pod "coredns-66bc5c9577-8dgc7" is "Ready"
	I1123 09:57:43.249015  300017 pod_ready.go:86] duration metric: took 5.729453ms for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.251635  300017 pod_ready.go:83] waiting for pod "etcd-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.256613  300017 pod_ready.go:94] pod "etcd-embed-certs-412583" is "Ready"
	I1123 09:57:43.256645  300017 pod_ready.go:86] duration metric: took 4.984583ms for pod "etcd-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.259023  300017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.264242  300017 pod_ready.go:94] pod "kube-apiserver-embed-certs-412583" is "Ready"
	I1123 09:57:43.264273  300017 pod_ready.go:86] duration metric: took 5.223434ms for pod "kube-apiserver-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.311182  300017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.642602  300017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-412583" is "Ready"
	I1123 09:57:43.642637  300017 pod_ready.go:86] duration metric: took 331.426321ms for pod "kube-controller-manager-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.843849  300017 pod_ready.go:83] waiting for pod "kube-proxy-wm7k2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.244623  300017 pod_ready.go:94] pod "kube-proxy-wm7k2" is "Ready"
	I1123 09:57:44.244667  300017 pod_ready.go:86] duration metric: took 400.77745ms for pod "kube-proxy-wm7k2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.444056  300017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.843963  300017 pod_ready.go:94] pod "kube-scheduler-embed-certs-412583" is "Ready"
	I1123 09:57:44.843992  300017 pod_ready.go:86] duration metric: took 399.904179ms for pod "kube-scheduler-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.844006  300017 pod_ready.go:40] duration metric: took 1.608365258s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:44.891853  300017 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:57:44.964864  300017 out.go:179] * Done! kubectl is now configured to use "embed-certs-412583" cluster and "default" namespace by default
	W1123 09:57:41.488122  296642 node_ready.go:57] node "no-preload-309734" has "Ready":"False" status (will retry)
	W1123 09:57:43.488201  296642 node_ready.go:57] node "no-preload-309734" has "Ready":"False" status (will retry)
	I1123 09:57:43.988019  296642 node_ready.go:49] node "no-preload-309734" is "Ready"
	I1123 09:57:43.988052  296642 node_ready.go:38] duration metric: took 14.003534589s for node "no-preload-309734" to be "Ready" ...
	I1123 09:57:43.988069  296642 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:57:43.988149  296642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:57:44.008503  296642 api_server.go:72] duration metric: took 14.434117996s to wait for apiserver process to appear ...
	I1123 09:57:44.008530  296642 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:57:44.008551  296642 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:57:44.017109  296642 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 09:57:44.018176  296642 api_server.go:141] control plane version: v1.34.1
	I1123 09:57:44.018200  296642 api_server.go:131] duration metric: took 9.663468ms to wait for apiserver health ...
	I1123 09:57:44.018208  296642 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:57:44.022287  296642 system_pods.go:59] 8 kube-system pods found
	I1123 09:57:44.022324  296642 system_pods.go:61] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.022351  296642 system_pods.go:61] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.022364  296642 system_pods.go:61] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.022369  296642 system_pods.go:61] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.022375  296642 system_pods.go:61] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.022381  296642 system_pods.go:61] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.022387  296642 system_pods.go:61] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.022397  296642 system_pods.go:61] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.022406  296642 system_pods.go:74] duration metric: took 4.191598ms to wait for pod list to return data ...
	I1123 09:57:44.022421  296642 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:57:44.025262  296642 default_sa.go:45] found service account: "default"
	I1123 09:57:44.025287  296642 default_sa.go:55] duration metric: took 2.858313ms for default service account to be created ...
	I1123 09:57:44.025300  296642 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:57:44.028240  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.028269  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.028275  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.028281  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.028285  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.028289  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.028293  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.028296  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.028300  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.028346  296642 retry.go:31] will retry after 283.472429ms: missing components: kube-dns
	I1123 09:57:44.317300  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.317353  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.317361  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.317370  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.317376  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.317382  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.317387  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.317391  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.317397  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.317416  296642 retry.go:31] will retry after 321.7427ms: missing components: kube-dns
	I1123 09:57:44.689277  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.689322  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.689344  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.689353  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.689359  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.689366  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.689370  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.689375  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.689382  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.689411  296642 retry.go:31] will retry after 353.961831ms: missing components: kube-dns
	I1123 09:57:45.048995  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:45.049060  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:45.049069  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:45.049078  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:45.049084  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:45.049090  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:45.049099  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:45.049104  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:45.049116  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:45.049135  296642 retry.go:31] will retry after 412.630882ms: missing components: kube-dns
	I1123 09:57:45.607770  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:45.607816  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:45.607826  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:45.607836  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:45.607841  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:45.607847  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:45.607851  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:45.607856  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:45.607873  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:45.607891  296642 retry.go:31] will retry after 544.365573ms: missing components: kube-dns
	I1123 09:57:41.425584  311138 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:57:41.425893  311138 start.go:159] libmachine.API.Create for "default-k8s-diff-port-696492" (driver="docker")
	I1123 09:57:41.425945  311138 client.go:173] LocalClient.Create starting
	I1123 09:57:41.426056  311138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem
	I1123 09:57:41.426100  311138 main.go:143] libmachine: Decoding PEM data...
	I1123 09:57:41.426121  311138 main.go:143] libmachine: Parsing certificate...
	I1123 09:57:41.426185  311138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem
	I1123 09:57:41.426208  311138 main.go:143] libmachine: Decoding PEM data...
	I1123 09:57:41.426217  311138 main.go:143] libmachine: Parsing certificate...
	I1123 09:57:41.426608  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:57:41.445568  311138 cli_runner.go:211] docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:57:41.445670  311138 network_create.go:284] running [docker network inspect default-k8s-diff-port-696492] to gather additional debugging logs...
	I1123 09:57:41.445697  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492
	W1123 09:57:41.465174  311138 cli_runner.go:211] docker network inspect default-k8s-diff-port-696492 returned with exit code 1
	I1123 09:57:41.465216  311138 network_create.go:287] error running [docker network inspect default-k8s-diff-port-696492]: docker network inspect default-k8s-diff-port-696492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-696492 not found
	I1123 09:57:41.465236  311138 network_create.go:289] output of [docker network inspect default-k8s-diff-port-696492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-696492 not found
	
	** /stderr **
	I1123 09:57:41.465403  311138 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:57:41.487255  311138 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-de5cba392bb4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:8d:f5:88:bc:8b} reservation:<nil>}
	I1123 09:57:41.488105  311138 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e2eabbe85d5b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:f4:02:bd:23:31} reservation:<nil>}
	I1123 09:57:41.489037  311138 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-22e47e96d08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:9e:83:f9:9f:f6} reservation:<nil>}
	I1123 09:57:41.489614  311138 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4fa988beb7cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:18:12:be:77:f6} reservation:<nil>}
	I1123 09:57:41.492079  311138 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d80820}
	I1123 09:57:41.492121  311138 network_create.go:124] attempt to create docker network default-k8s-diff-port-696492 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 09:57:41.492171  311138 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 default-k8s-diff-port-696492
	I1123 09:57:41.554538  311138 network_create.go:108] docker network default-k8s-diff-port-696492 192.168.85.0/24 created
	I1123 09:57:41.554588  311138 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-696492" container
	I1123 09:57:41.554664  311138 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:57:41.575522  311138 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-696492 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:57:41.598058  311138 oci.go:103] Successfully created a docker volume default-k8s-diff-port-696492
	I1123 09:57:41.598141  311138 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-696492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --entrypoint /usr/bin/test -v default-k8s-diff-port-696492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:57:42.041176  311138 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-696492
	I1123 09:57:42.041254  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:42.041269  311138 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:57:42.041325  311138 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-696492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 09:57:46.265821  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:46.265851  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Running
	I1123 09:57:46.265856  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:46.265860  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:46.265863  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:46.265868  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:46.265870  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:46.265875  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:46.265879  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Running
	I1123 09:57:46.265889  296642 system_pods.go:126] duration metric: took 2.240582653s to wait for k8s-apps to be running ...
	I1123 09:57:46.265903  296642 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:57:46.265972  296642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:57:46.283075  296642 system_svc.go:56] duration metric: took 17.161056ms WaitForService to wait for kubelet
	I1123 09:57:46.283105  296642 kubeadm.go:587] duration metric: took 16.70872571s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:46.283128  296642 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:57:46.491444  296642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:57:46.491473  296642 node_conditions.go:123] node cpu capacity is 8
	I1123 09:57:46.491486  296642 node_conditions.go:105] duration metric: took 208.353263ms to run NodePressure ...
	I1123 09:57:46.491509  296642 start.go:242] waiting for startup goroutines ...
	I1123 09:57:46.491520  296642 start.go:247] waiting for cluster config update ...
	I1123 09:57:46.491533  296642 start.go:256] writing updated cluster config ...
	I1123 09:57:46.491804  296642 ssh_runner.go:195] Run: rm -f paused
	I1123 09:57:46.498152  296642 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:46.503240  296642 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.508998  296642 pod_ready.go:94] pod "coredns-66bc5c9577-sx25q" is "Ready"
	I1123 09:57:46.509028  296642 pod_ready.go:86] duration metric: took 5.757344ms for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.512072  296642 pod_ready.go:83] waiting for pod "etcd-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.517750  296642 pod_ready.go:94] pod "etcd-no-preload-309734" is "Ready"
	I1123 09:57:46.517777  296642 pod_ready.go:86] duration metric: took 5.673234ms for pod "etcd-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.520446  296642 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.525480  296642 pod_ready.go:94] pod "kube-apiserver-no-preload-309734" is "Ready"
	I1123 09:57:46.525513  296642 pod_ready.go:86] duration metric: took 5.036877ms for pod "kube-apiserver-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.528196  296642 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.902790  296642 pod_ready.go:94] pod "kube-controller-manager-no-preload-309734" is "Ready"
	I1123 09:57:46.902815  296642 pod_ready.go:86] duration metric: took 374.588413ms for pod "kube-controller-manager-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.104263  296642 pod_ready.go:83] waiting for pod "kube-proxy-jpvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.504876  296642 pod_ready.go:94] pod "kube-proxy-jpvhc" is "Ready"
	I1123 09:57:47.504999  296642 pod_ready.go:86] duration metric: took 400.696383ms for pod "kube-proxy-jpvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.706275  296642 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:48.104684  296642 pod_ready.go:94] pod "kube-scheduler-no-preload-309734" is "Ready"
	I1123 09:57:48.104720  296642 pod_ready.go:86] duration metric: took 398.41369ms for pod "kube-scheduler-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:48.104739  296642 pod_ready.go:40] duration metric: took 1.606531718s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:48.181507  296642 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:57:48.183959  296642 out.go:179] * Done! kubectl is now configured to use "no-preload-309734" cluster and "default" namespace by default
	I1123 09:57:46.740944  311138 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-696492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.699532205s)
	I1123 09:57:46.741010  311138 kic.go:203] duration metric: took 4.699734046s to extract preloaded images to volume ...
	W1123 09:57:46.741179  311138 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:57:46.741234  311138 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:57:46.741304  311138 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:57:46.807009  311138 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-696492 --name default-k8s-diff-port-696492 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --network default-k8s-diff-port-696492 --ip 192.168.85.2 --volume default-k8s-diff-port-696492:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:57:47.199589  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Running}}
	I1123 09:57:47.220655  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.242623  311138 cli_runner.go:164] Run: docker exec default-k8s-diff-port-696492 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:57:47.295743  311138 oci.go:144] the created container "default-k8s-diff-port-696492" has a running status.
	I1123 09:57:47.295783  311138 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa...
	I1123 09:57:47.562280  311138 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:57:47.611801  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.650055  311138 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:57:47.650078  311138 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-696492 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:57:47.733580  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.763876  311138 machine.go:94] provisionDockerMachine start ...
	I1123 09:57:47.763997  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:47.798484  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:47.798947  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:47.798969  311138 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:57:47.966787  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-696492
	
	I1123 09:57:47.966822  311138 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-696492"
	I1123 09:57:47.966888  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:47.993804  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:47.994099  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:47.994117  311138 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-696492 && echo "default-k8s-diff-port-696492" | sudo tee /etc/hostname
	I1123 09:57:48.174661  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-696492
	
	I1123 09:57:48.174752  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.203529  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:48.203843  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:48.203881  311138 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-696492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-696492/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-696492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:57:48.379959  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:57:48.380002  311138 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-3552/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-3552/.minikube}
	I1123 09:57:48.380096  311138 ubuntu.go:190] setting up certificates
	I1123 09:57:48.380127  311138 provision.go:84] configureAuth start
	I1123 09:57:48.380222  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:48.421922  311138 provision.go:143] copyHostCerts
	I1123 09:57:48.422045  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem, removing ...
	I1123 09:57:48.422074  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem
	I1123 09:57:48.422196  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem (1679 bytes)
	I1123 09:57:48.422353  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem, removing ...
	I1123 09:57:48.422365  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem
	I1123 09:57:48.422399  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem (1082 bytes)
	I1123 09:57:48.422467  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem, removing ...
	I1123 09:57:48.422523  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem
	I1123 09:57:48.422566  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem (1123 bytes)
	I1123 09:57:48.422642  311138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-696492 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-696492 localhost minikube]
	I1123 09:57:48.539621  311138 provision.go:177] copyRemoteCerts
	I1123 09:57:48.539708  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:57:48.539762  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.564284  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:48.677154  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 09:57:48.704807  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 09:57:48.730566  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:57:48.755362  311138 provision.go:87] duration metric: took 375.193527ms to configureAuth
	I1123 09:57:48.755396  311138 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:57:48.755732  311138 config.go:182] Loaded profile config "default-k8s-diff-port-696492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:48.755752  311138 machine.go:97] duration metric: took 991.839554ms to provisionDockerMachine
	I1123 09:57:48.755762  311138 client.go:176] duration metric: took 7.329805852s to LocalClient.Create
	I1123 09:57:48.755786  311138 start.go:167] duration metric: took 7.329894759s to libmachine.API.Create "default-k8s-diff-port-696492"
	I1123 09:57:48.755799  311138 start.go:293] postStartSetup for "default-k8s-diff-port-696492" (driver="docker")
	I1123 09:57:48.755811  311138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:57:48.755868  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:57:48.755919  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.784317  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:48.901734  311138 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:57:48.906292  311138 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:57:48.906325  311138 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:57:48.906355  311138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/addons for local assets ...
	I1123 09:57:48.906577  311138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/files for local assets ...
	I1123 09:57:48.906715  311138 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem -> 71092.pem in /etc/ssl/certs
	I1123 09:57:48.906835  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:57:48.917431  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:57:48.947477  311138 start.go:296] duration metric: took 191.661634ms for postStartSetup
	I1123 09:57:48.947957  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:48.973141  311138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json ...
	I1123 09:57:48.973692  311138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:57:48.973751  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.996029  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.106682  311138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:57:49.112230  311138 start.go:128] duration metric: took 7.689569326s to createHost
	I1123 09:57:49.112259  311138 start.go:83] releasing machines lock for "default-k8s-diff-port-696492", held for 7.689795634s
	I1123 09:57:49.112351  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:49.135976  311138 ssh_runner.go:195] Run: cat /version.json
	I1123 09:57:49.136033  311138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:57:49.136042  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:49.136113  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:49.160077  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.161278  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.264125  311138 ssh_runner.go:195] Run: systemctl --version
	I1123 09:57:49.329282  311138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:57:49.335197  311138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:57:49.335268  311138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:57:49.366357  311138 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:57:49.366380  311138 start.go:496] detecting cgroup driver to use...
	I1123 09:57:49.366416  311138 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:57:49.366470  311138 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 09:57:49.383235  311138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 09:57:49.399768  311138 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:57:49.399842  311138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:57:49.420125  311138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:57:49.442300  311138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:57:49.541498  311138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:57:49.659194  311138 docker.go:234] disabling docker service ...
	I1123 09:57:49.659272  311138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:57:49.682070  311138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:57:49.698015  311138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:57:49.798105  311138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:57:49.894575  311138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:57:49.911733  311138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:57:49.931314  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 09:57:49.945424  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 09:57:49.956889  311138 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 09:57:49.956953  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 09:57:49.967923  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:57:49.979575  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 09:57:49.991202  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:57:50.002918  311138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:57:50.015086  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 09:57:50.027588  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 09:57:50.038500  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
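The sed edits above (sandbox_image, restrict_oom_score_adj, SystemdCgroup, conf_dir, enable_unprivileged_ports) all rewrite /etc/containerd/config.toml in place. As a rough sketch only — the exact plugin section path differs between containerd 1.x and 2.x config versions, and the real file on the node carries many more settings — the CRI-related fragment ends up looking roughly like:

	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  enable_unprivileged_ports = true
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = true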
	I1123 09:57:50.050508  311138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:57:50.060907  311138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:57:50.069882  311138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:57:50.169936  311138 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 09:57:50.287676  311138 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 09:57:50.287747  311138 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 09:57:50.292388  311138 start.go:564] Will wait 60s for crictl version
	I1123 09:57:50.292450  311138 ssh_runner.go:195] Run: which crictl
	I1123 09:57:50.296873  311138 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:57:50.325533  311138 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 09:57:50.325605  311138 ssh_runner.go:195] Run: containerd --version
	I1123 09:57:50.350974  311138 ssh_runner.go:195] Run: containerd --version
	I1123 09:57:50.381808  311138 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 09:57:50.383456  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:57:50.407801  311138 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 09:57:50.413000  311138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
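The /etc/hosts update just above uses a small idempotent shell idiom: filter out any existing line for the name, append the fresh mapping to a temp file, then copy the temp file back with sudo (the temp file is needed because the pipeline reads /etc/hosts while producing the new content, and only the final cp runs as root). A minimal standalone equivalent, using the same values from this run:

	# drop any stale host.minikube.internal entry, then re-add it
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts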
	I1123 09:57:50.425563  311138 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:57:50.425681  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:50.425728  311138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:57:50.458513  311138 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:57:50.458540  311138 containerd.go:534] Images already preloaded, skipping extraction
	I1123 09:57:50.458578  311138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:57:50.490466  311138 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:57:50.490488  311138 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:57:50.490496  311138 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1123 09:57:50.490604  311138 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-696492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
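	In the kubelet drop-in above, the empty ExecStart= line is the standard systemd override idiom: it clears the ExecStart inherited from the base kubelet.service so that the following ExecStart= line fully replaces the start command, instead of systemd rejecting the unit for having two ExecStart= settings.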
	I1123 09:57:50.490683  311138 ssh_runner.go:195] Run: sudo crictl info
	I1123 09:57:50.519013  311138 cni.go:84] Creating CNI manager for ""
	I1123 09:57:50.519047  311138 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:57:50.519066  311138 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:57:50.519093  311138 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-696492 NodeName:default-k8s-diff-port-696492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:57:50.519249  311138 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-696492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:57:50.519326  311138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:57:50.531186  311138 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:57:50.531258  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:57:50.540764  311138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1123 09:57:50.556738  311138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:57:50.577978  311138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1123 09:57:50.594432  311138 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:57:50.598984  311138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:57:50.611087  311138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:57:50.713969  311138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:57:50.738999  311138 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492 for IP: 192.168.85.2
	I1123 09:57:50.739022  311138 certs.go:195] generating shared ca certs ...
	I1123 09:57:50.739042  311138 certs.go:227] acquiring lock for ca certs: {Name:mkf0ec2efb8866dd9406da39e0a5f5dc931fd377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:50.739203  311138 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key
	I1123 09:57:50.739256  311138 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key
	I1123 09:57:50.739271  311138 certs.go:257] generating profile certs ...
	I1123 09:57:50.739364  311138 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.key
	I1123 09:57:50.739382  311138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.crt with IP's: []
	I1123 09:57:50.902937  311138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.crt ...
	I1123 09:57:50.902975  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.crt: {Name:mk1be782fc73373be310b15837c277ec6685e2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:50.903176  311138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.key ...
	I1123 09:57:50.903195  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.key: {Name:mk6db5327a581ec783720f15c44b3730584ff35a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:50.903326  311138 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1
	I1123 09:57:50.903367  311138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 09:57:51.007041  311138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1 ...
	I1123 09:57:51.007079  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1: {Name:mk4d1a5fa60f123a8319b137c9ec74f1fa189955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.007285  311138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1 ...
	I1123 09:57:51.007298  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1: {Name:mkdd2b300e22459c4a8968bc56aef3e76c8f86f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.007514  311138 certs.go:382] copying /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt
	I1123 09:57:51.007636  311138 certs.go:386] copying /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key
	I1123 09:57:51.007701  311138 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key
	I1123 09:57:51.007715  311138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt with IP's: []
	I1123 09:57:51.045607  311138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt ...
	I1123 09:57:51.045642  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt: {Name:mkb29252ee6ba2f8bc8fb350259fbc7d524b689b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.045864  311138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key ...
	I1123 09:57:51.045887  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key: {Name:mk39c6b0c10f773b67a0a811d41c76d128d66647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.046116  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem (1338 bytes)
	W1123 09:57:51.046161  311138 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109_empty.pem, impossibly tiny 0 bytes
	I1123 09:57:51.046173  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:57:51.046197  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem (1082 bytes)
	I1123 09:57:51.046222  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:57:51.046245  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem (1679 bytes)
	I1123 09:57:51.046287  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:57:51.047046  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:57:51.071141  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 09:57:51.092546  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:57:51.116776  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:57:51.139235  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:57:51.160968  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:57:51.181315  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:57:51.203122  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:57:51.226401  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:57:51.252100  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem --> /usr/share/ca-certificates/7109.pem (1338 bytes)
	I1123 09:57:51.274287  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /usr/share/ca-certificates/71092.pem (1708 bytes)
	I1123 09:57:51.297105  311138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:57:51.313841  311138 ssh_runner.go:195] Run: openssl version
	I1123 09:57:51.322431  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:57:51.335037  311138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:57:51.339776  311138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:57:51.339848  311138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:57:51.383842  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:57:51.395820  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7109.pem && ln -fs /usr/share/ca-certificates/7109.pem /etc/ssl/certs/7109.pem"
	I1123 09:57:51.406811  311138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7109.pem
	I1123 09:57:51.411731  311138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:26 /usr/share/ca-certificates/7109.pem
	I1123 09:57:51.411802  311138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7109.pem
	I1123 09:57:51.456262  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7109.pem /etc/ssl/certs/51391683.0"
	I1123 09:57:51.467466  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71092.pem && ln -fs /usr/share/ca-certificates/71092.pem /etc/ssl/certs/71092.pem"
	I1123 09:57:51.479299  311138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71092.pem
	I1123 09:57:51.484434  311138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:26 /usr/share/ca-certificates/71092.pem
	I1123 09:57:51.484508  311138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71092.pem
	I1123 09:57:51.525183  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71092.pem /etc/ssl/certs/3ec20f2e.0"
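	The test -L / ln -fs steps above create the hash-named links (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL uses to look up CA certificates by subject hash; the link name is simply the output of openssl x509 -hash plus a numeric suffix (.0, or higher on hash collisions). Roughly, for a single certificate:

	# expose a CA cert under its OpenSSL subject-hash name
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"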
	I1123 09:57:51.535904  311138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:57:51.540741  311138 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:57:51.540806  311138 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:57:51.540889  311138 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 09:57:51.540937  311138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:57:51.573411  311138 cri.go:89] found id: ""
	I1123 09:57:51.573483  311138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:57:51.583208  311138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:57:51.592170  311138 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:57:51.592237  311138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:57:51.601224  311138 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:57:51.601243  311138 kubeadm.go:158] found existing configuration files:
	
	I1123 09:57:51.601292  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 09:57:51.610806  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:57:51.610871  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:57:51.619590  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 09:57:51.628676  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:57:51.628753  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:57:51.638382  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 09:57:51.648357  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:57:51.648452  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:57:51.657606  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 09:57:51.667094  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:57:51.667160  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:57:51.677124  311138 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:57:51.753028  311138 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 09:57:51.832851  311138 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	83b803e375a11       56cc512116c8f       6 seconds ago       Running             busybox                   0                   08fea159e192e       busybox                                     default
	6d27e56eea5cb       52546a367cc9e       13 seconds ago      Running             coredns                   0                   c35b50f299022       coredns-66bc5c9577-sx25q                    kube-system
	103095b7989ee       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   f875236ef29c4       storage-provisioner                         kube-system
	5c49f9103fd4c       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   52e89975c29a3       kindnet-d6zbp                               kube-system
	b1f2f40f83352       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   3c931f4ebe3b6       kube-proxy-jpvhc                            kube-system
	d13615209a18d       c80c8dbafe7dd       39 seconds ago      Running             kube-controller-manager   0                   9b3682a73d7c9       kube-controller-manager-no-preload-309734   kube-system
	b7a0f8d20ac46       c3994bc696102       39 seconds ago      Running             kube-apiserver            0                   af6630aa22518       kube-apiserver-no-preload-309734            kube-system
	d3705422907a4       7dd6aaa1717ab       39 seconds ago      Running             kube-scheduler            0                   001d285d1626c       kube-scheduler-no-preload-309734            kube-system
	a81288f6ae55b       5f1f5298c888d       39 seconds ago      Running             etcd                      0                   7c2a74ce9f993       etcd-no-preload-309734                      kube-system
	
	
	==> containerd <==
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.437664858Z" level=info msg="CreateContainer within sandbox \"f875236ef29c4dcaca84613fe0d3342cd15f562c1b6c450727f815a45d23abec\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"103095b7989eeb9782636e7c2857b6f8b7ec6b0d8f19a4d16401f43390b5b6c8\""
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.438966110Z" level=info msg="StartContainer for \"103095b7989eeb9782636e7c2857b6f8b7ec6b0d8f19a4d16401f43390b5b6c8\""
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.441596627Z" level=info msg="Container 6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.442688190Z" level=info msg="connecting to shim 103095b7989eeb9782636e7c2857b6f8b7ec6b0d8f19a4d16401f43390b5b6c8" address="unix:///run/containerd/s/e3c90dc88ed2011a17e06013960c4ff36dcd5e5c4c0b472e967ab7c541e7cc59" protocol=ttrpc version=3
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.452070718Z" level=info msg="CreateContainer within sandbox \"c35b50f29902262db0930fff1232f8a0750b061fc8c644ff40065e2189b7a0c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7\""
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.452928128Z" level=info msg="StartContainer for \"6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7\""
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.454511893Z" level=info msg="connecting to shim 6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7" address="unix:///run/containerd/s/57a86b3d1f07aaee01b72ba5832cca7be61629982786bb2793fc5b74a12bbf4c" protocol=ttrpc version=3
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.687659950Z" level=info msg="StartContainer for \"6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7\" returns successfully"
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.690525274Z" level=info msg="StartContainer for \"103095b7989eeb9782636e7c2857b6f8b7ec6b0d8f19a4d16401f43390b5b6c8\" returns successfully"
	Nov 23 09:57:48 no-preload-309734 containerd[656]: time="2025-11-23T09:57:48.717380764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:8d46a619-a382-4103-900c-1ce2911f6fb9,Namespace:default,Attempt:0,}"
	Nov 23 09:57:48 no-preload-309734 containerd[656]: time="2025-11-23T09:57:48.775104419Z" level=info msg="connecting to shim 08fea159e192e081b068d0606fe4a52cb2c890cdcf80e6514527a0c123f207a8" address="unix:///run/containerd/s/5259430685db90287109d0f7c347cef09803959202e1e931a6a2771afa7e7192" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:57:48 no-preload-309734 containerd[656]: time="2025-11-23T09:57:48.856379941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:8d46a619-a382-4103-900c-1ce2911f6fb9,Namespace:default,Attempt:0,} returns sandbox id \"08fea159e192e081b068d0606fe4a52cb2c890cdcf80e6514527a0c123f207a8\""
	Nov 23 09:57:48 no-preload-309734 containerd[656]: time="2025-11-23T09:57:48.858825417Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.962286000Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.963095464Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396647"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.964577847Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.966619514Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.967057622Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.108184767s"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.967095781Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.973882386Z" level=info msg="CreateContainer within sandbox \"08fea159e192e081b068d0606fe4a52cb2c890cdcf80e6514527a0c123f207a8\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.985011283Z" level=info msg="Container 83b803e375a11888348fd2bbcd5084e6b4b80efb2a13b2236d002edd28b3958e: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.992752698Z" level=info msg="CreateContainer within sandbox \"08fea159e192e081b068d0606fe4a52cb2c890cdcf80e6514527a0c123f207a8\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"83b803e375a11888348fd2bbcd5084e6b4b80efb2a13b2236d002edd28b3958e\""
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.993498398Z" level=info msg="StartContainer for \"83b803e375a11888348fd2bbcd5084e6b4b80efb2a13b2236d002edd28b3958e\""
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.994585188Z" level=info msg="connecting to shim 83b803e375a11888348fd2bbcd5084e6b4b80efb2a13b2236d002edd28b3958e" address="unix:///run/containerd/s/5259430685db90287109d0f7c347cef09803959202e1e931a6a2771afa7e7192" protocol=ttrpc version=3
	Nov 23 09:57:51 no-preload-309734 containerd[656]: time="2025-11-23T09:57:51.056866970Z" level=info msg="StartContainer for \"83b803e375a11888348fd2bbcd5084e6b4b80efb2a13b2236d002edd28b3958e\" returns successfully"
	
	
	==> coredns [6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35403 - 63133 "HINFO IN 8016908280927694689.584937637230355027. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.034548045s
	
	
	==> describe nodes <==
	Name:               no-preload-309734
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-309734
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=no-preload-309734
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_57_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:57:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-309734
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:57:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:57:55 +0000   Sun, 23 Nov 2025 09:57:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:57:55 +0000   Sun, 23 Nov 2025 09:57:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:57:55 +0000   Sun, 23 Nov 2025 09:57:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:57:55 +0000   Sun, 23 Nov 2025 09:57:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-309734
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                3f1b400d-a81e-4472-94b0-c48cd427d30f
	  Boot ID:                    e4c4d39b-bebd-4037-9237-26b945dbe084
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-sx25q                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-309734                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-d6zbp                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-309734             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-309734    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-jpvhc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-309734             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  33s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node no-preload-309734 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node no-preload-309734 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node no-preload-309734 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node no-preload-309734 event: Registered Node no-preload-309734 in Controller
	  Normal  NodeReady                14s   kubelet          Node no-preload-309734 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.288463] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 09:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[Nov23 09:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.195562] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +5.912917] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 c0 1c 98 33 a9 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.002091] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 47 bd bf 96 57 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[  +4.460318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 85 b9 91 f8 a4 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +2.904694] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	[Nov23 09:57] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 76 48 bf 8b d1 fc 08 06
	[  +0.000931] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	
	
	==> etcd [a81288f6ae55b6a042b8f67e3e9eedfe1c61dd371e39e06133e14aee6f968eb3] <==
	{"level":"info","ts":"2025-11-23T09:57:45.604990Z","caller":"traceutil/trace.go:172","msg":"trace[729928754] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"254.824024ms","start":"2025-11-23T09:57:45.350149Z","end":"2025-11-23T09:57:45.604973Z","steps":["trace[729928754] 'process raft request'  (duration: 254.674341ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:57:45.605011Z","caller":"traceutil/trace.go:172","msg":"trace[1938700971] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:411; }","duration":"140.550391ms","start":"2025-11-23T09:57:45.464447Z","end":"2025-11-23T09:57:45.604997Z","steps":["trace[1938700971] 'agreement among raft nodes before linearized reading'  (duration: 140.381152ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:45.941511Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"251.788183ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766362597583908 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-486tp\" mod_revision:297 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-486tp\" value_size:1199 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-486tp\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:57:45.941745Z","caller":"traceutil/trace.go:172","msg":"trace[1743399975] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"326.123128ms","start":"2025-11-23T09:57:45.615609Z","end":"2025-11-23T09:57:45.941732Z","steps":["trace[1743399975] 'process raft request'  (duration: 326.076119ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:45.941842Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:57:45.615584Z","time spent":"326.215431ms","remote":"127.0.0.1:44416","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5713,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-sx25q\" mod_revision:412 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-sx25q\" value_size:5654 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-sx25q\" > >"}
	{"level":"info","ts":"2025-11-23T09:57:45.941864Z","caller":"traceutil/trace.go:172","msg":"trace[1936674735] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"332.279773ms","start":"2025-11-23T09:57:45.609561Z","end":"2025-11-23T09:57:45.941841Z","steps":["trace[1936674735] 'process raft request'  (duration: 79.578516ms)","trace[1936674735] 'compare'  (duration: 251.670387ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:45.942006Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:57:45.609541Z","time spent":"332.400937ms","remote":"127.0.0.1:44736","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1258,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-486tp\" mod_revision:297 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-486tp\" value_size:1199 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-486tp\" > >"}
	{"level":"info","ts":"2025-11-23T09:57:45.941918Z","caller":"traceutil/trace.go:172","msg":"trace[8509950] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"332.190987ms","start":"2025-11-23T09:57:45.609715Z","end":"2025-11-23T09:57:45.941906Z","steps":["trace[8509950] 'process raft request'  (duration: 331.895647ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:45.942189Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:57:45.609702Z","time spent":"332.438504ms","remote":"127.0.0.1:44318","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":891,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:321 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:834 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"warn","ts":"2025-11-23T09:57:46.262257Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.656054ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766362597583913 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:414 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:834 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:57:46.262452Z","caller":"traceutil/trace.go:172","msg":"trace[1069729271] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"315.994309ms","start":"2025-11-23T09:57:45.946434Z","end":"2025-11-23T09:57:46.262428Z","steps":["trace[1069729271] 'process raft request'  (duration: 165.67524ms)","trace[1069729271] 'compare'  (duration: 149.449024ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:46.262562Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:57:45.946263Z","time spent":"316.246576ms","remote":"127.0.0.1:44318","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":891,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:414 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:834 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2025-11-23T09:57:46.262599Z","caller":"traceutil/trace.go:172","msg":"trace[695268418] linearizableReadLoop","detail":"{readStateIndex:435; appliedIndex:432; }","duration":"108.431625ms","start":"2025-11-23T09:57:46.154153Z","end":"2025-11-23T09:57:46.262584Z","steps":["trace[695268418] 'read index received'  (duration: 51.519µs)","trace[695268418] 'applied index is now lower than readState.Index'  (duration: 108.379558ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:46.262799Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.64839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:57:46.262885Z","caller":"traceutil/trace.go:172","msg":"trace[310550640] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:419; }","duration":"108.737789ms","start":"2025-11-23T09:57:46.154136Z","end":"2025-11-23T09:57:46.262874Z","steps":["trace[310550640] 'agreement among raft nodes before linearized reading'  (duration: 108.564956ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:57:46.262817Z","caller":"traceutil/trace.go:172","msg":"trace[1142413134] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"311.542847ms","start":"2025-11-23T09:57:45.951257Z","end":"2025-11-23T09:57:46.262800Z","steps":["trace[1142413134] 'process raft request'  (duration: 311.251022ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:46.263729Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:57:45.951238Z","time spent":"312.436616ms","remote":"127.0.0.1:44416","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4275,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:402 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:4221 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2025-11-23T09:57:46.262828Z","caller":"traceutil/trace.go:172","msg":"trace[590993168] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"313.957864ms","start":"2025-11-23T09:57:45.948856Z","end":"2025-11-23T09:57:46.262814Z","steps":["trace[590993168] 'process raft request'  (duration: 313.554765ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:46.263848Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:57:45.948835Z","time spent":"314.949298ms","remote":"127.0.0.1:45572","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4134,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" mod_revision:363 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" value_size:4074 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" > >"}
	{"level":"info","ts":"2025-11-23T09:57:46.434432Z","caller":"traceutil/trace.go:172","msg":"trace[1042096307] linearizableReadLoop","detail":"{readStateIndex:435; appliedIndex:435; }","duration":"154.238935ms","start":"2025-11-23T09:57:46.280166Z","end":"2025-11-23T09:57:46.434405Z","steps":["trace[1042096307] 'read index received'  (duration: 154.154392ms)","trace[1042096307] 'applied index is now lower than readState.Index'  (duration: 79.147µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:46.489436Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"209.253474ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:57:46.489506Z","caller":"traceutil/trace.go:172","msg":"trace[1882503074] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:419; }","duration":"209.334681ms","start":"2025-11-23T09:57:46.280154Z","end":"2025-11-23T09:57:46.489489Z","steps":["trace[1882503074] 'agreement among raft nodes before linearized reading'  (duration: 154.347258ms)","trace[1882503074] 'range keys from in-memory index tree'  (duration: 54.884092ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:46.489532Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.811021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:57:46.489569Z","caller":"traceutil/trace.go:172","msg":"trace[2032908186] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:420; }","duration":"204.853126ms","start":"2025-11-23T09:57:46.284706Z","end":"2025-11-23T09:57:46.489559Z","steps":["trace[2032908186] 'agreement among raft nodes before linearized reading'  (duration: 204.788988ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:57:46.489526Z","caller":"traceutil/trace.go:172","msg":"trace[506296957] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"219.451231ms","start":"2025-11-23T09:57:46.270057Z","end":"2025-11-23T09:57:46.489509Z","steps":["trace[506296957] 'process raft request'  (duration: 164.324633ms)","trace[506296957] 'compare'  (duration: 54.991634ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:57:57 up 40 min,  0 user,  load average: 4.93, 4.11, 2.62
	Linux no-preload-309734 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5c49f9103fd4c18deec14e3758e958db34380a181d3ea11344ed107acc94faab] <==
	I1123 09:57:33.661564       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:57:33.661882       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 09:57:33.662065       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:57:33.662081       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:57:33.662111       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:57:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:57:33.914181       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:57:33.914227       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:57:33.914238       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:57:33.914423       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:57:34.259526       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:57:34.259590       1 metrics.go:72] Registering metrics
	I1123 09:57:34.259697       1 controller.go:711] "Syncing nftables rules"
	I1123 09:57:43.914914       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:57:43.914995       1 main.go:301] handling current node
	I1123 09:57:53.910821       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:57:53.910866       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b7a0f8d20ac463989e63a3565c249816e2e20c9067287e9f2b8c3db6cfb05aab] <==
	E1123 09:57:21.143080       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 09:57:21.255182       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:21.255412       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:57:21.279519       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:57:21.279712       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:57:21.279867       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:21.348092       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:57:22.025838       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:57:22.034539       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:57:22.034566       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:57:22.997425       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:57:23.053253       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:57:23.232998       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:57:23.242170       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1123 09:57:23.243794       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:57:23.250061       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:57:23.336386       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:57:24.347834       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:57:24.360466       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:57:24.368827       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:57:29.096206       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:29.104211       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:29.392865       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:57:29.438704       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 09:57:56.530610       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:56958: use of closed network connection
	
	
	==> kube-controller-manager [d13615209a18dd7b287968a7f98989bb3ce87db942b906988e39fde11c294cce] <==
	I1123 09:57:28.346104       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 09:57:28.346123       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 09:57:28.358516       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-309734" podCIDRs=["10.244.0.0/24"]
	I1123 09:57:28.370402       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:57:28.370448       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:57:28.376787       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:57:28.384459       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 09:57:28.384606       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 09:57:28.384970       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 09:57:28.385801       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:57:28.385822       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:57:28.385853       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:57:28.385872       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:57:28.385948       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:57:28.386483       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:57:28.387261       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:57:28.387290       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:57:28.387296       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:57:28.387373       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:57:28.387426       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 09:57:28.387764       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:57:28.388909       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:57:28.390893       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:57:28.398493       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:57:48.339554       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b1f2f40f833522a80b40c076eb2228ca8ab64af5ae29ec412679554033ccf342] <==
	I1123 09:57:30.225772       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:57:30.326216       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:57:30.428019       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:57:30.428069       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 09:57:30.428155       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:57:30.470994       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:57:30.471157       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:57:30.480600       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:57:30.481164       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:57:30.481259       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:57:30.483774       1 config.go:309] "Starting node config controller"
	I1123 09:57:30.483932       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:57:30.483965       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:57:30.483886       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:57:30.484009       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:57:30.483832       1 config.go:200] "Starting service config controller"
	I1123 09:57:30.485261       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:57:30.483852       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:57:30.485625       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:57:30.584426       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:57:30.585604       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:57:30.585724       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d3705422907a474de42f4b2ba1fea7490c10e3083855a79fad006ba545fab905] <==
	E1123 09:57:21.323927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:57:21.324568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:57:21.324655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:57:21.324773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:57:21.324762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:57:21.324925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:57:21.325786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:57:21.325813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:57:22.181484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:57:22.216690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:57:22.262145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:57:22.281643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:57:22.288228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:57:22.289460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:57:22.306787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:57:22.453485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:57:22.463201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:57:22.504380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:57:22.518073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:57:22.533460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:57:22.552683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:57:22.587917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:57:22.601681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:57:22.727221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1123 09:57:25.613253       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:57:25 no-preload-309734 kubelet[2135]: I1123 09:57:25.301923    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-309734" podStartSLOduration=1.3019004889999999 podStartE2EDuration="1.301900489s" podCreationTimestamp="2025-11-23 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:25.301870004 +0000 UTC m=+1.184556442" watchObservedRunningTime="2025-11-23 09:57:25.301900489 +0000 UTC m=+1.184586938"
	Nov 23 09:57:25 no-preload-309734 kubelet[2135]: I1123 09:57:25.343592    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-309734" podStartSLOduration=3.343566116 podStartE2EDuration="3.343566116s" podCreationTimestamp="2025-11-23 09:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:25.322167734 +0000 UTC m=+1.204854180" watchObservedRunningTime="2025-11-23 09:57:25.343566116 +0000 UTC m=+1.226252553"
	Nov 23 09:57:25 no-preload-309734 kubelet[2135]: I1123 09:57:25.362057    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-309734" podStartSLOduration=1.3620392049999999 podStartE2EDuration="1.362039205s" podCreationTimestamp="2025-11-23 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:25.344940934 +0000 UTC m=+1.227627370" watchObservedRunningTime="2025-11-23 09:57:25.362039205 +0000 UTC m=+1.244725642"
	Nov 23 09:57:25 no-preload-309734 kubelet[2135]: I1123 09:57:25.362190    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-309734" podStartSLOduration=1.362179992 podStartE2EDuration="1.362179992s" podCreationTimestamp="2025-11-23 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:25.361953507 +0000 UTC m=+1.244639947" watchObservedRunningTime="2025-11-23 09:57:25.362179992 +0000 UTC m=+1.244866430"
	Nov 23 09:57:28 no-preload-309734 kubelet[2135]: I1123 09:57:28.409253    2135 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:57:28 no-preload-309734 kubelet[2135]: I1123 09:57:28.410053    2135 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.548826    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1c56dde-7af0-49ca-a982-04ae56add5f9-xtables-lock\") pod \"kindnet-d6zbp\" (UID: \"d1c56dde-7af0-49ca-a982-04ae56add5f9\") " pod="kube-system/kindnet-d6zbp"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.548904    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1c56dde-7af0-49ca-a982-04ae56add5f9-lib-modules\") pod \"kindnet-d6zbp\" (UID: \"d1c56dde-7af0-49ca-a982-04ae56add5f9\") " pod="kube-system/kindnet-d6zbp"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.548935    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpq2v\" (UniqueName: \"kubernetes.io/projected/d1c56dde-7af0-49ca-a982-04ae56add5f9-kube-api-access-qpq2v\") pod \"kindnet-d6zbp\" (UID: \"d1c56dde-7af0-49ca-a982-04ae56add5f9\") " pod="kube-system/kindnet-d6zbp"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.549020    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb0ab966-23fc-429f-bcfe-eb5726b865be-kube-proxy\") pod \"kube-proxy-jpvhc\" (UID: \"eb0ab966-23fc-429f-bcfe-eb5726b865be\") " pod="kube-system/kube-proxy-jpvhc"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.549055    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb0ab966-23fc-429f-bcfe-eb5726b865be-lib-modules\") pod \"kube-proxy-jpvhc\" (UID: \"eb0ab966-23fc-429f-bcfe-eb5726b865be\") " pod="kube-system/kube-proxy-jpvhc"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.549078    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxvtp\" (UniqueName: \"kubernetes.io/projected/eb0ab966-23fc-429f-bcfe-eb5726b865be-kube-api-access-zxvtp\") pod \"kube-proxy-jpvhc\" (UID: \"eb0ab966-23fc-429f-bcfe-eb5726b865be\") " pod="kube-system/kube-proxy-jpvhc"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.549103    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d1c56dde-7af0-49ca-a982-04ae56add5f9-cni-cfg\") pod \"kindnet-d6zbp\" (UID: \"d1c56dde-7af0-49ca-a982-04ae56add5f9\") " pod="kube-system/kindnet-d6zbp"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.549128    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb0ab966-23fc-429f-bcfe-eb5726b865be-xtables-lock\") pod \"kube-proxy-jpvhc\" (UID: \"eb0ab966-23fc-429f-bcfe-eb5726b865be\") " pod="kube-system/kube-proxy-jpvhc"
	Nov 23 09:57:32 no-preload-309734 kubelet[2135]: I1123 09:57:32.926726    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jpvhc" podStartSLOduration=3.926700801 podStartE2EDuration="3.926700801s" podCreationTimestamp="2025-11-23 09:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:30.324860956 +0000 UTC m=+6.207547396" watchObservedRunningTime="2025-11-23 09:57:32.926700801 +0000 UTC m=+8.809387239"
	Nov 23 09:57:37 no-preload-309734 kubelet[2135]: I1123 09:57:37.321200    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-d6zbp" podStartSLOduration=5.317665175 podStartE2EDuration="8.321177483s" podCreationTimestamp="2025-11-23 09:57:29 +0000 UTC" firstStartedPulling="2025-11-23 09:57:30.284577539 +0000 UTC m=+6.167263969" lastFinishedPulling="2025-11-23 09:57:33.288089848 +0000 UTC m=+9.170776277" observedRunningTime="2025-11-23 09:57:34.337086182 +0000 UTC m=+10.219772617" watchObservedRunningTime="2025-11-23 09:57:37.321177483 +0000 UTC m=+13.203863919"
	Nov 23 09:57:43 no-preload-309734 kubelet[2135]: I1123 09:57:43.948176    2135 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:57:44 no-preload-309734 kubelet[2135]: I1123 09:57:44.063563    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b1352952-5fff-47aa-af05-dd6b2078fa39-tmp\") pod \"storage-provisioner\" (UID: \"b1352952-5fff-47aa-af05-dd6b2078fa39\") " pod="kube-system/storage-provisioner"
	Nov 23 09:57:44 no-preload-309734 kubelet[2135]: I1123 09:57:44.063643    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50adb46a-6c29-465a-adba-f806eeef81aa-config-volume\") pod \"coredns-66bc5c9577-sx25q\" (UID: \"50adb46a-6c29-465a-adba-f806eeef81aa\") " pod="kube-system/coredns-66bc5c9577-sx25q"
	Nov 23 09:57:44 no-preload-309734 kubelet[2135]: I1123 09:57:44.063673    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brm7p\" (UniqueName: \"kubernetes.io/projected/50adb46a-6c29-465a-adba-f806eeef81aa-kube-api-access-brm7p\") pod \"coredns-66bc5c9577-sx25q\" (UID: \"50adb46a-6c29-465a-adba-f806eeef81aa\") " pod="kube-system/coredns-66bc5c9577-sx25q"
	Nov 23 09:57:44 no-preload-309734 kubelet[2135]: I1123 09:57:44.063774    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9sgg\" (UniqueName: \"kubernetes.io/projected/b1352952-5fff-47aa-af05-dd6b2078fa39-kube-api-access-t9sgg\") pod \"storage-provisioner\" (UID: \"b1352952-5fff-47aa-af05-dd6b2078fa39\") " pod="kube-system/storage-provisioner"
	Nov 23 09:57:45 no-preload-309734 kubelet[2135]: I1123 09:57:45.607001    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sx25q" podStartSLOduration=16.606976312 podStartE2EDuration="16.606976312s" podCreationTimestamp="2025-11-23 09:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:45.606832745 +0000 UTC m=+21.489519183" watchObservedRunningTime="2025-11-23 09:57:45.606976312 +0000 UTC m=+21.489662748"
	Nov 23 09:57:48 no-preload-309734 kubelet[2135]: I1123 09:57:48.393282    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=18.393252975 podStartE2EDuration="18.393252975s" podCreationTimestamp="2025-11-23 09:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:46.264860218 +0000 UTC m=+22.147546655" watchObservedRunningTime="2025-11-23 09:57:48.393252975 +0000 UTC m=+24.275939412"
	Nov 23 09:57:48 no-preload-309734 kubelet[2135]: I1123 09:57:48.499644    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg7d6\" (UniqueName: \"kubernetes.io/projected/8d46a619-a382-4103-900c-1ce2911f6fb9-kube-api-access-lg7d6\") pod \"busybox\" (UID: \"8d46a619-a382-4103-900c-1ce2911f6fb9\") " pod="default/busybox"
	Nov 23 09:57:51 no-preload-309734 kubelet[2135]: I1123 09:57:51.373809    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.264225157 podStartE2EDuration="3.373786874s" podCreationTimestamp="2025-11-23 09:57:48 +0000 UTC" firstStartedPulling="2025-11-23 09:57:48.85844247 +0000 UTC m=+24.741128886" lastFinishedPulling="2025-11-23 09:57:50.968004175 +0000 UTC m=+26.850690603" observedRunningTime="2025-11-23 09:57:51.373424002 +0000 UTC m=+27.256110440" watchObservedRunningTime="2025-11-23 09:57:51.373786874 +0000 UTC m=+27.256473311"
	
	
	==> storage-provisioner [103095b7989eeb9782636e7c2857b6f8b7ec6b0d8f19a4d16401f43390b5b6c8] <==
	I1123 09:57:44.548631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:57:44.557824       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:57:44.557879       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:57:44.562111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:44.686927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:57:44.687140       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:57:44.687422       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62d614ab-3709-4e6f-ae73-033d177de3d1", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-309734_1ad0791b-f836-4dd5-a010-1f2702a54569 became leader
	I1123 09:57:44.687583       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-309734_1ad0791b-f836-4dd5-a010-1f2702a54569!
	W1123 09:57:44.690282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:44.749212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:57:44.788474       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-309734_1ad0791b-f836-4dd5-a010-1f2702a54569!
	W1123 09:57:46.753163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:46.761100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:48.765258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:48.773161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:50.776936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:50.781283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:52.785706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:52.791036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:54.795558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:54.801510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:56.805598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:56.810855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-309734 -n no-preload-309734
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-309734 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-309734
helpers_test.go:243: (dbg) docker inspect no-preload-309734:

-- stdout --
	[
	    {
	        "Id": "124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9",
	        "Created": "2025-11-23T09:56:51.96900087Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297581,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:56:52.019118184Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9/hosts",
	        "LogPath": "/var/lib/docker/containers/124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9/124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9-json.log",
	        "Name": "/no-preload-309734",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-309734:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-309734",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "124fd8be1e7fbebe0bb227a1877e558f5dbb7eac6f7735f11d3a1b971cf007c9",
	                "LowerDir": "/var/lib/docker/overlay2/f3ff7e283981f4ebe6f99aedbe0f6c8c431e57bbff30ff8b7adc33fdfcb8e86f-init/diff:/var/lib/docker/overlay2/c80a0dfdb81b7753b0a82e2bc6458805cbbad0a9ce5819c63e1d9b7b71ba226c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f3ff7e283981f4ebe6f99aedbe0f6c8c431e57bbff30ff8b7adc33fdfcb8e86f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f3ff7e283981f4ebe6f99aedbe0f6c8c431e57bbff30ff8b7adc33fdfcb8e86f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f3ff7e283981f4ebe6f99aedbe0f6c8c431e57bbff30ff8b7adc33fdfcb8e86f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-309734",
	                "Source": "/var/lib/docker/volumes/no-preload-309734/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-309734",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-309734",
	                "name.minikube.sigs.k8s.io": "no-preload-309734",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cca13c5a748b4620b9b193a9f5361761307d6436457d8fdb1ab5b5a8656d14c4",
	            "SandboxKey": "/var/run/docker/netns/cca13c5a748b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-309734": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2d57442761c92d8836fe3467356c458dc3c295dcf4c4aec369e66e2eb0689f5e",
	                    "EndpointID": "6607f39c95da6d4d22fb81caf588075ebcbfbedd8774d2fa0c442ec6a9a0af2c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "42:fb:18:a2:0c:6b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-309734",
	                        "124fd8be1e7f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-309734 -n no-preload-309734
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-309734 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-309734 logs -n 25: (1.473396026s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-676928 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /var/lib/kubelet/config.yaml                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status docker --all --full --no-pager                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat docker --no-pager                                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/docker/daemon.json                                                                                                                              │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo docker system info                                                                                                                                       │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl status cri-docker --all --full --no-pager                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat cri-docker --no-pager                                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                 │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                           │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cri-dockerd --version                                                                                                                                    │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status containerd --all --full --no-pager                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl cat containerd --no-pager                                                                                                                      │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /lib/systemd/system/containerd.service                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/containerd/config.toml                                                                                                                          │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo containerd config dump                                                                                                                                   │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status crio --all --full --no-pager                                                                                                            │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat crio --no-pager                                                                                                                            │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                  │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo crio config                                                                                                                                              │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p bridge-676928                                                                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p disable-driver-mounts-178820                                                                                                                                                │ disable-driver-mounts-178820 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ start   │ -p default-k8s-diff-port-696492 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ default-k8s-diff-port-696492 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-709593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                   │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ stop    │ -p old-k8s-version-709593 --alsologtostderr -v=3                                                                                                                               │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:57:41
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:57:41.194019  311138 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:57:41.194298  311138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:57:41.194308  311138 out.go:374] Setting ErrFile to fd 2...
	I1123 09:57:41.194312  311138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:57:41.194606  311138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:57:41.195144  311138 out.go:368] Setting JSON to false
	I1123 09:57:41.196591  311138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2400,"bootTime":1763889461,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:57:41.196668  311138 start.go:143] virtualization: kvm guest
	I1123 09:57:41.199167  311138 out.go:179] * [default-k8s-diff-port-696492] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:57:41.201043  311138 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:57:41.201094  311138 notify.go:221] Checking for updates...
	I1123 09:57:41.204382  311138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:57:41.206017  311138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:57:41.207959  311138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:57:41.209794  311138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:57:41.211809  311138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:57:41.214009  311138 config.go:182] Loaded profile config "embed-certs-412583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:41.214105  311138 config.go:182] Loaded profile config "no-preload-309734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:41.214180  311138 config.go:182] Loaded profile config "old-k8s-version-709593": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 09:57:41.214271  311138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:57:41.241306  311138 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:57:41.241474  311138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:57:41.312013  311138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:57:41.299959199 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:57:41.312116  311138 docker.go:319] overlay module found
	I1123 09:57:41.314243  311138 out.go:179] * Using the docker driver based on user configuration
	I1123 09:57:41.316002  311138 start.go:309] selected driver: docker
	I1123 09:57:41.316024  311138 start.go:927] validating driver "docker" against <nil>
	I1123 09:57:41.316037  311138 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:57:41.316751  311138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:57:41.385595  311138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:57:41.373759534 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:57:41.385794  311138 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:57:41.386023  311138 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:41.388087  311138 out.go:179] * Using Docker driver with root privileges
	I1123 09:57:41.389651  311138 cni.go:84] Creating CNI manager for ""
	I1123 09:57:41.389725  311138 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:57:41.389738  311138 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:57:41.389816  311138 start.go:353] cluster config:
	{Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:57:41.391556  311138 out.go:179] * Starting "default-k8s-diff-port-696492" primary control-plane node in "default-k8s-diff-port-696492" cluster
	I1123 09:57:41.392982  311138 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 09:57:41.394476  311138 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:57:41.395978  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:41.396028  311138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 09:57:41.396036  311138 cache.go:65] Caching tarball of preloaded images
	I1123 09:57:41.396075  311138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:57:41.396157  311138 preload.go:238] Found /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 09:57:41.396175  311138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 09:57:41.396320  311138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json ...
	I1123 09:57:41.396374  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json: {Name:mk3b81d8fd8561a54828649e3e510565221995b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:41.422089  311138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:57:41.422112  311138 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:57:41.422133  311138 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:57:41.422177  311138 start.go:360] acquireMachinesLock for default-k8s-diff-port-696492: {Name:mkc8ee83ed2b7a995e355ddec223dfeea233bbf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:57:41.422316  311138 start.go:364] duration metric: took 112.296µs to acquireMachinesLock for "default-k8s-diff-port-696492"
	I1123 09:57:41.422500  311138 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:57:41.422632  311138 start.go:125] createHost starting for "" (driver="docker")
	W1123 09:57:37.251564  300017 node_ready.go:57] node "embed-certs-412583" has "Ready":"False" status (will retry)
	W1123 09:57:39.751746  300017 node_ready.go:57] node "embed-certs-412583" has "Ready":"False" status (will retry)
	I1123 09:57:42.255256  300017 node_ready.go:49] node "embed-certs-412583" is "Ready"
	I1123 09:57:42.255291  300017 node_ready.go:38] duration metric: took 11.507766088s for node "embed-certs-412583" to be "Ready" ...
	I1123 09:57:42.255310  300017 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:57:42.255471  300017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:57:42.277737  300017 api_server.go:72] duration metric: took 12.028046262s to wait for apiserver process to appear ...
	I1123 09:57:42.277770  300017 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:57:42.277792  300017 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:57:42.285468  300017 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 09:57:42.287274  300017 api_server.go:141] control plane version: v1.34.1
	I1123 09:57:42.287395  300017 api_server.go:131] duration metric: took 9.61454ms to wait for apiserver health ...
	I1123 09:57:42.287422  300017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:57:42.294433  300017 system_pods.go:59] 8 kube-system pods found
	I1123 09:57:42.294478  300017 system_pods.go:61] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.294486  300017 system_pods.go:61] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.294493  300017 system_pods.go:61] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.294499  300017 system_pods.go:61] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.294505  300017 system_pods.go:61] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.294510  300017 system_pods.go:61] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.294515  300017 system_pods.go:61] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.294526  300017 system_pods.go:61] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.294539  300017 system_pods.go:74] duration metric: took 7.098728ms to wait for pod list to return data ...
	I1123 09:57:42.294549  300017 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:57:42.298321  300017 default_sa.go:45] found service account: "default"
	I1123 09:57:42.298368  300017 default_sa.go:55] duration metric: took 3.811774ms for default service account to be created ...
	I1123 09:57:42.298382  300017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:57:42.302807  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.302871  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.302887  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.302896  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.302903  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.302927  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.302937  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.302943  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.302954  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.303049  300017 retry.go:31] will retry after 268.599682ms: missing components: kube-dns
	I1123 09:57:42.577490  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.577531  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.577541  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.577550  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.577557  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.577563  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.577568  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.577573  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.577581  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.577600  300017 retry.go:31] will retry after 240.156475ms: missing components: kube-dns
	I1123 09:57:42.822131  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:42.822171  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:42.822177  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:42.822182  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:42.822186  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:42.822190  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:42.822194  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:42.822197  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:42.822202  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:42.822216  300017 retry.go:31] will retry after 383.926777ms: missing components: kube-dns
	I1123 09:57:43.211532  300017 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:43.211575  300017 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Running
	I1123 09:57:43.211585  300017 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running
	I1123 09:57:43.211592  300017 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running
	I1123 09:57:43.211600  300017 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running
	I1123 09:57:43.211608  300017 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running
	I1123 09:57:43.211624  300017 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:57:43.211635  300017 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running
	I1123 09:57:43.211640  300017 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Running
	I1123 09:57:43.211650  300017 system_pods.go:126] duration metric: took 913.260942ms to wait for k8s-apps to be running ...
	I1123 09:57:43.211661  300017 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:57:43.211722  300017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:57:43.226055  300017 system_svc.go:56] duration metric: took 14.383207ms WaitForService to wait for kubelet
	I1123 09:57:43.226087  300017 kubeadm.go:587] duration metric: took 12.976401428s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:43.226108  300017 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:57:43.229492  300017 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:57:43.229524  300017 node_conditions.go:123] node cpu capacity is 8
	I1123 09:57:43.229547  300017 node_conditions.go:105] duration metric: took 3.432669ms to run NodePressure ...
	I1123 09:57:43.229560  300017 start.go:242] waiting for startup goroutines ...
	I1123 09:57:43.229570  300017 start.go:247] waiting for cluster config update ...
	I1123 09:57:43.229583  300017 start.go:256] writing updated cluster config ...
	I1123 09:57:43.229975  300017 ssh_runner.go:195] Run: rm -f paused
	I1123 09:57:43.235596  300017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:43.243251  300017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.248984  300017 pod_ready.go:94] pod "coredns-66bc5c9577-8dgc7" is "Ready"
	I1123 09:57:43.249015  300017 pod_ready.go:86] duration metric: took 5.729453ms for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.251635  300017 pod_ready.go:83] waiting for pod "etcd-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.256613  300017 pod_ready.go:94] pod "etcd-embed-certs-412583" is "Ready"
	I1123 09:57:43.256645  300017 pod_ready.go:86] duration metric: took 4.984583ms for pod "etcd-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.259023  300017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.264242  300017 pod_ready.go:94] pod "kube-apiserver-embed-certs-412583" is "Ready"
	I1123 09:57:43.264273  300017 pod_ready.go:86] duration metric: took 5.223434ms for pod "kube-apiserver-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.311182  300017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.642602  300017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-412583" is "Ready"
	I1123 09:57:43.642637  300017 pod_ready.go:86] duration metric: took 331.426321ms for pod "kube-controller-manager-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:43.843849  300017 pod_ready.go:83] waiting for pod "kube-proxy-wm7k2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.244623  300017 pod_ready.go:94] pod "kube-proxy-wm7k2" is "Ready"
	I1123 09:57:44.244667  300017 pod_ready.go:86] duration metric: took 400.77745ms for pod "kube-proxy-wm7k2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.444056  300017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.843963  300017 pod_ready.go:94] pod "kube-scheduler-embed-certs-412583" is "Ready"
	I1123 09:57:44.843992  300017 pod_ready.go:86] duration metric: took 399.904179ms for pod "kube-scheduler-embed-certs-412583" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:44.844006  300017 pod_ready.go:40] duration metric: took 1.608365258s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:44.891853  300017 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:57:44.964864  300017 out.go:179] * Done! kubectl is now configured to use "embed-certs-412583" cluster and "default" namespace by default
	W1123 09:57:41.488122  296642 node_ready.go:57] node "no-preload-309734" has "Ready":"False" status (will retry)
	W1123 09:57:43.488201  296642 node_ready.go:57] node "no-preload-309734" has "Ready":"False" status (will retry)
	I1123 09:57:43.988019  296642 node_ready.go:49] node "no-preload-309734" is "Ready"
	I1123 09:57:43.988052  296642 node_ready.go:38] duration metric: took 14.003534589s for node "no-preload-309734" to be "Ready" ...
	I1123 09:57:43.988069  296642 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:57:43.988149  296642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:57:44.008503  296642 api_server.go:72] duration metric: took 14.434117996s to wait for apiserver process to appear ...
	I1123 09:57:44.008530  296642 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:57:44.008551  296642 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:57:44.017109  296642 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 09:57:44.018176  296642 api_server.go:141] control plane version: v1.34.1
	I1123 09:57:44.018200  296642 api_server.go:131] duration metric: took 9.663468ms to wait for apiserver health ...
	I1123 09:57:44.018208  296642 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:57:44.022287  296642 system_pods.go:59] 8 kube-system pods found
	I1123 09:57:44.022324  296642 system_pods.go:61] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.022351  296642 system_pods.go:61] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.022364  296642 system_pods.go:61] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.022369  296642 system_pods.go:61] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.022375  296642 system_pods.go:61] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.022381  296642 system_pods.go:61] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.022387  296642 system_pods.go:61] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.022397  296642 system_pods.go:61] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.022406  296642 system_pods.go:74] duration metric: took 4.191598ms to wait for pod list to return data ...
	I1123 09:57:44.022421  296642 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:57:44.025262  296642 default_sa.go:45] found service account: "default"
	I1123 09:57:44.025287  296642 default_sa.go:55] duration metric: took 2.858313ms for default service account to be created ...
	I1123 09:57:44.025300  296642 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:57:44.028240  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.028269  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.028275  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.028281  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.028285  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.028289  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.028293  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.028296  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.028300  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.028346  296642 retry.go:31] will retry after 283.472429ms: missing components: kube-dns
	I1123 09:57:44.317300  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.317353  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.317361  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.317370  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.317376  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.317382  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.317387  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.317391  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.317397  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.317416  296642 retry.go:31] will retry after 321.7427ms: missing components: kube-dns
	I1123 09:57:44.689277  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:44.689322  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:44.689344  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:44.689353  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:44.689359  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:44.689366  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:44.689370  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:44.689375  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:44.689382  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:44.689411  296642 retry.go:31] will retry after 353.961831ms: missing components: kube-dns
	I1123 09:57:45.048995  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:45.049060  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:45.049069  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:45.049078  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:45.049084  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:45.049090  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:45.049099  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:45.049104  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:45.049116  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:45.049135  296642 retry.go:31] will retry after 412.630882ms: missing components: kube-dns
	I1123 09:57:45.607770  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:45.607816  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:57:45.607826  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:45.607836  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:45.607841  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:45.607847  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:45.607851  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:45.607856  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:45.607873  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:57:45.607891  296642 retry.go:31] will retry after 544.365573ms: missing components: kube-dns
	I1123 09:57:41.425584  311138 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:57:41.425893  311138 start.go:159] libmachine.API.Create for "default-k8s-diff-port-696492" (driver="docker")
	I1123 09:57:41.425945  311138 client.go:173] LocalClient.Create starting
	I1123 09:57:41.426056  311138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem
	I1123 09:57:41.426100  311138 main.go:143] libmachine: Decoding PEM data...
	I1123 09:57:41.426121  311138 main.go:143] libmachine: Parsing certificate...
	I1123 09:57:41.426185  311138 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem
	I1123 09:57:41.426208  311138 main.go:143] libmachine: Decoding PEM data...
	I1123 09:57:41.426217  311138 main.go:143] libmachine: Parsing certificate...
	I1123 09:57:41.426608  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:57:41.445568  311138 cli_runner.go:211] docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:57:41.445670  311138 network_create.go:284] running [docker network inspect default-k8s-diff-port-696492] to gather additional debugging logs...
	I1123 09:57:41.445697  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492
	W1123 09:57:41.465174  311138 cli_runner.go:211] docker network inspect default-k8s-diff-port-696492 returned with exit code 1
	I1123 09:57:41.465216  311138 network_create.go:287] error running [docker network inspect default-k8s-diff-port-696492]: docker network inspect default-k8s-diff-port-696492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-696492 not found
	I1123 09:57:41.465236  311138 network_create.go:289] output of [docker network inspect default-k8s-diff-port-696492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-696492 not found
	
	** /stderr **
	I1123 09:57:41.465403  311138 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:57:41.487255  311138 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-de5cba392bb4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:8d:f5:88:bc:8b} reservation:<nil>}
	I1123 09:57:41.488105  311138 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e2eabbe85d5b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:da:f4:02:bd:23:31} reservation:<nil>}
	I1123 09:57:41.489037  311138 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-22e47e96d08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:9e:83:f9:9f:f6} reservation:<nil>}
	I1123 09:57:41.489614  311138 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4fa988beb7cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:18:12:be:77:f6} reservation:<nil>}
	I1123 09:57:41.492079  311138 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d80820}
	I1123 09:57:41.492121  311138 network_create.go:124] attempt to create docker network default-k8s-diff-port-696492 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 09:57:41.492171  311138 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 default-k8s-diff-port-696492
	I1123 09:57:41.554538  311138 network_create.go:108] docker network default-k8s-diff-port-696492 192.168.85.0/24 created
	I1123 09:57:41.554588  311138 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-696492" container
	I1123 09:57:41.554664  311138 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:57:41.575522  311138 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-696492 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:57:41.598058  311138 oci.go:103] Successfully created a docker volume default-k8s-diff-port-696492
	I1123 09:57:41.598141  311138 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-696492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --entrypoint /usr/bin/test -v default-k8s-diff-port-696492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:57:42.041176  311138 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-696492
	I1123 09:57:42.041254  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:42.041269  311138 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:57:42.041325  311138 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-696492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 09:57:46.265821  296642 system_pods.go:86] 8 kube-system pods found
	I1123 09:57:46.265851  296642 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Running
	I1123 09:57:46.265856  296642 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running
	I1123 09:57:46.265860  296642 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running
	I1123 09:57:46.265863  296642 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running
	I1123 09:57:46.265868  296642 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running
	I1123 09:57:46.265870  296642 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:57:46.265875  296642 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running
	I1123 09:57:46.265879  296642 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Running
	I1123 09:57:46.265889  296642 system_pods.go:126] duration metric: took 2.240582653s to wait for k8s-apps to be running ...
	I1123 09:57:46.265903  296642 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:57:46.265972  296642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:57:46.283075  296642 system_svc.go:56] duration metric: took 17.161056ms WaitForService to wait for kubelet
	I1123 09:57:46.283105  296642 kubeadm.go:587] duration metric: took 16.70872571s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:57:46.283128  296642 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:57:46.491444  296642 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:57:46.491473  296642 node_conditions.go:123] node cpu capacity is 8
	I1123 09:57:46.491486  296642 node_conditions.go:105] duration metric: took 208.353263ms to run NodePressure ...
	I1123 09:57:46.491509  296642 start.go:242] waiting for startup goroutines ...
	I1123 09:57:46.491520  296642 start.go:247] waiting for cluster config update ...
	I1123 09:57:46.491533  296642 start.go:256] writing updated cluster config ...
	I1123 09:57:46.491804  296642 ssh_runner.go:195] Run: rm -f paused
	I1123 09:57:46.498152  296642 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:46.503240  296642 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.508998  296642 pod_ready.go:94] pod "coredns-66bc5c9577-sx25q" is "Ready"
	I1123 09:57:46.509028  296642 pod_ready.go:86] duration metric: took 5.757344ms for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.512072  296642 pod_ready.go:83] waiting for pod "etcd-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.517750  296642 pod_ready.go:94] pod "etcd-no-preload-309734" is "Ready"
	I1123 09:57:46.517777  296642 pod_ready.go:86] duration metric: took 5.673234ms for pod "etcd-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.520446  296642 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.525480  296642 pod_ready.go:94] pod "kube-apiserver-no-preload-309734" is "Ready"
	I1123 09:57:46.525513  296642 pod_ready.go:86] duration metric: took 5.036877ms for pod "kube-apiserver-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.528196  296642 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:46.902790  296642 pod_ready.go:94] pod "kube-controller-manager-no-preload-309734" is "Ready"
	I1123 09:57:46.902815  296642 pod_ready.go:86] duration metric: took 374.588413ms for pod "kube-controller-manager-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.104263  296642 pod_ready.go:83] waiting for pod "kube-proxy-jpvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.504876  296642 pod_ready.go:94] pod "kube-proxy-jpvhc" is "Ready"
	I1123 09:57:47.504999  296642 pod_ready.go:86] duration metric: took 400.696383ms for pod "kube-proxy-jpvhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:47.706275  296642 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:48.104684  296642 pod_ready.go:94] pod "kube-scheduler-no-preload-309734" is "Ready"
	I1123 09:57:48.104720  296642 pod_ready.go:86] duration metric: took 398.41369ms for pod "kube-scheduler-no-preload-309734" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:57:48.104739  296642 pod_ready.go:40] duration metric: took 1.606531718s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:57:48.181507  296642 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:57:48.183959  296642 out.go:179] * Done! kubectl is now configured to use "no-preload-309734" cluster and "default" namespace by default
	I1123 09:57:46.740944  311138 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-696492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.699532205s)
	I1123 09:57:46.741010  311138 kic.go:203] duration metric: took 4.699734046s to extract preloaded images to volume ...
	W1123 09:57:46.741179  311138 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:57:46.741234  311138 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:57:46.741304  311138 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:57:46.807009  311138 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-696492 --name default-k8s-diff-port-696492 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-696492 --network default-k8s-diff-port-696492 --ip 192.168.85.2 --volume default-k8s-diff-port-696492:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:57:47.199589  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Running}}
	I1123 09:57:47.220655  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.242623  311138 cli_runner.go:164] Run: docker exec default-k8s-diff-port-696492 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:57:47.295743  311138 oci.go:144] the created container "default-k8s-diff-port-696492" has a running status.
	I1123 09:57:47.295783  311138 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa...
	I1123 09:57:47.562280  311138 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:57:47.611801  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.650055  311138 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:57:47.650078  311138 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-696492 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:57:47.733580  311138 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-696492 --format={{.State.Status}}
	I1123 09:57:47.763876  311138 machine.go:94] provisionDockerMachine start ...
	I1123 09:57:47.763997  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:47.798484  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:47.798947  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:47.798969  311138 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:57:47.966787  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-696492
	
	I1123 09:57:47.966822  311138 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-696492"
	I1123 09:57:47.966888  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:47.993804  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:47.994099  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:47.994117  311138 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-696492 && echo "default-k8s-diff-port-696492" | sudo tee /etc/hostname
	I1123 09:57:48.174661  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-696492
	
	I1123 09:57:48.174752  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.203529  311138 main.go:143] libmachine: Using SSH client type: native
	I1123 09:57:48.203843  311138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1123 09:57:48.203881  311138 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-696492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-696492/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-696492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:57:48.379959  311138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:57:48.380002  311138 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-3552/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-3552/.minikube}
	I1123 09:57:48.380096  311138 ubuntu.go:190] setting up certificates
	I1123 09:57:48.380127  311138 provision.go:84] configureAuth start
	I1123 09:57:48.380222  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:48.421922  311138 provision.go:143] copyHostCerts
	I1123 09:57:48.422045  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem, removing ...
	I1123 09:57:48.422074  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem
	I1123 09:57:48.422196  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem (1679 bytes)
	I1123 09:57:48.422353  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem, removing ...
	I1123 09:57:48.422365  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem
	I1123 09:57:48.422399  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem (1082 bytes)
	I1123 09:57:48.422467  311138 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem, removing ...
	I1123 09:57:48.422523  311138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem
	I1123 09:57:48.422566  311138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem (1123 bytes)
	I1123 09:57:48.422642  311138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-696492 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-696492 localhost minikube]
	I1123 09:57:48.539621  311138 provision.go:177] copyRemoteCerts
	I1123 09:57:48.539708  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:57:48.539762  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.564284  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:48.677154  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 09:57:48.704807  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 09:57:48.730566  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:57:48.755362  311138 provision.go:87] duration metric: took 375.193527ms to configureAuth
	I1123 09:57:48.755396  311138 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:57:48.755732  311138 config.go:182] Loaded profile config "default-k8s-diff-port-696492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:57:48.755752  311138 machine.go:97] duration metric: took 991.839554ms to provisionDockerMachine
	I1123 09:57:48.755762  311138 client.go:176] duration metric: took 7.329805852s to LocalClient.Create
	I1123 09:57:48.755786  311138 start.go:167] duration metric: took 7.329894759s to libmachine.API.Create "default-k8s-diff-port-696492"
	I1123 09:57:48.755799  311138 start.go:293] postStartSetup for "default-k8s-diff-port-696492" (driver="docker")
	I1123 09:57:48.755811  311138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:57:48.755868  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:57:48.755919  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.784317  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:48.901734  311138 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:57:48.906292  311138 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:57:48.906325  311138 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:57:48.906355  311138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/addons for local assets ...
	I1123 09:57:48.906577  311138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/files for local assets ...
	I1123 09:57:48.906715  311138 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem -> 71092.pem in /etc/ssl/certs
	I1123 09:57:48.906835  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:57:48.917431  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:57:48.947477  311138 start.go:296] duration metric: took 191.661634ms for postStartSetup
	I1123 09:57:48.947957  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:48.973141  311138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/config.json ...
	I1123 09:57:48.973692  311138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:57:48.973751  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:48.996029  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.106682  311138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:57:49.112230  311138 start.go:128] duration metric: took 7.689569326s to createHost
	I1123 09:57:49.112259  311138 start.go:83] releasing machines lock for "default-k8s-diff-port-696492", held for 7.689795634s
	I1123 09:57:49.112351  311138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-696492
	I1123 09:57:49.135976  311138 ssh_runner.go:195] Run: cat /version.json
	I1123 09:57:49.136033  311138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:57:49.136042  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:49.136113  311138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-696492
	I1123 09:57:49.160077  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.161278  311138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/default-k8s-diff-port-696492/id_rsa Username:docker}
	I1123 09:57:49.264125  311138 ssh_runner.go:195] Run: systemctl --version
	I1123 09:57:49.329282  311138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:57:49.335197  311138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:57:49.335268  311138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:57:49.366357  311138 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:57:49.366380  311138 start.go:496] detecting cgroup driver to use...
	I1123 09:57:49.366416  311138 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:57:49.366470  311138 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 09:57:49.383235  311138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 09:57:49.399768  311138 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:57:49.399842  311138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:57:49.420125  311138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:57:49.442300  311138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:57:49.541498  311138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:57:49.659194  311138 docker.go:234] disabling docker service ...
	I1123 09:57:49.659272  311138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:57:49.682070  311138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:57:49.698015  311138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:57:49.798105  311138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:57:49.894575  311138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:57:49.911733  311138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:57:49.931314  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 09:57:49.945424  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 09:57:49.956889  311138 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 09:57:49.956953  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 09:57:49.967923  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:57:49.979575  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 09:57:49.991202  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:57:50.002918  311138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:57:50.015086  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 09:57:50.027588  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 09:57:50.038500  311138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 09:57:50.050508  311138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:57:50.060907  311138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:57:50.069882  311138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:57:50.169936  311138 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 09:57:50.287676  311138 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 09:57:50.287747  311138 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 09:57:50.292388  311138 start.go:564] Will wait 60s for crictl version
	I1123 09:57:50.292450  311138 ssh_runner.go:195] Run: which crictl
	I1123 09:57:50.296873  311138 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:57:50.325533  311138 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 09:57:50.325605  311138 ssh_runner.go:195] Run: containerd --version
	I1123 09:57:50.350974  311138 ssh_runner.go:195] Run: containerd --version
	I1123 09:57:50.381808  311138 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 09:57:50.383456  311138 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-696492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:57:50.407801  311138 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 09:57:50.413000  311138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:57:50.425563  311138 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:57:50.425681  311138 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:57:50.425728  311138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:57:50.458513  311138 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:57:50.458540  311138 containerd.go:534] Images already preloaded, skipping extraction
	I1123 09:57:50.458578  311138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:57:50.490466  311138 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:57:50.490488  311138 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:57:50.490496  311138 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1123 09:57:50.490604  311138 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-696492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:57:50.490683  311138 ssh_runner.go:195] Run: sudo crictl info
	I1123 09:57:50.519013  311138 cni.go:84] Creating CNI manager for ""
	I1123 09:57:50.519047  311138 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:57:50.519066  311138 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:57:50.519093  311138 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-696492 NodeName:default-k8s-diff-port-696492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:57:50.519249  311138 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-696492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:57:50.519326  311138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:57:50.531186  311138 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:57:50.531258  311138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:57:50.540764  311138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1123 09:57:50.556738  311138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:57:50.577978  311138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1123 09:57:50.594432  311138 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:57:50.598984  311138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:57:50.611087  311138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:57:50.713969  311138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:57:50.738999  311138 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492 for IP: 192.168.85.2
	I1123 09:57:50.739022  311138 certs.go:195] generating shared ca certs ...
	I1123 09:57:50.739042  311138 certs.go:227] acquiring lock for ca certs: {Name:mkf0ec2efb8866dd9406da39e0a5f5dc931fd377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:50.739203  311138 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key
	I1123 09:57:50.739256  311138 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key
	I1123 09:57:50.739271  311138 certs.go:257] generating profile certs ...
	I1123 09:57:50.739364  311138 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.key
	I1123 09:57:50.739382  311138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.crt with IP's: []
	I1123 09:57:50.902937  311138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.crt ...
	I1123 09:57:50.902975  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.crt: {Name:mk1be782fc73373be310b15837c277ec6685e2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:50.903176  311138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.key ...
	I1123 09:57:50.903195  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/client.key: {Name:mk6db5327a581ec783720f15c44b3730584ff35a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:50.903326  311138 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1
	I1123 09:57:50.903367  311138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 09:57:51.007041  311138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1 ...
	I1123 09:57:51.007079  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1: {Name:mk4d1a5fa60f123a8319b137c9ec74f1fa189955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.007285  311138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1 ...
	I1123 09:57:51.007298  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1: {Name:mkdd2b300e22459c4a8968bc56aef3e76c8f86f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.007514  311138 certs.go:382] copying /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt.0c4255b1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt
	I1123 09:57:51.007636  311138 certs.go:386] copying /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key.0c4255b1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key
	I1123 09:57:51.007701  311138 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key
	I1123 09:57:51.007715  311138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt with IP's: []
	I1123 09:57:51.045607  311138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt ...
	I1123 09:57:51.045642  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt: {Name:mkb29252ee6ba2f8bc8fb350259fbc7d524b689b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.045864  311138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key ...
	I1123 09:57:51.045887  311138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key: {Name:mk39c6b0c10f773b67a0a811d41c76d128d66647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:57:51.046116  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem (1338 bytes)
	W1123 09:57:51.046161  311138 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109_empty.pem, impossibly tiny 0 bytes
	I1123 09:57:51.046173  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:57:51.046197  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem (1082 bytes)
	I1123 09:57:51.046222  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:57:51.046245  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem (1679 bytes)
	I1123 09:57:51.046287  311138 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:57:51.047046  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:57:51.071141  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 09:57:51.092546  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:57:51.116776  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:57:51.139235  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:57:51.160968  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:57:51.181315  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:57:51.203122  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/default-k8s-diff-port-696492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:57:51.226401  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:57:51.252100  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem --> /usr/share/ca-certificates/7109.pem (1338 bytes)
	I1123 09:57:51.274287  311138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /usr/share/ca-certificates/71092.pem (1708 bytes)
	I1123 09:57:51.297105  311138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:57:51.313841  311138 ssh_runner.go:195] Run: openssl version
	I1123 09:57:51.322431  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:57:51.335037  311138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:57:51.339776  311138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:57:51.339848  311138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:57:51.383842  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:57:51.395820  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7109.pem && ln -fs /usr/share/ca-certificates/7109.pem /etc/ssl/certs/7109.pem"
	I1123 09:57:51.406811  311138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7109.pem
	I1123 09:57:51.411731  311138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:26 /usr/share/ca-certificates/7109.pem
	I1123 09:57:51.411802  311138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7109.pem
	I1123 09:57:51.456262  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7109.pem /etc/ssl/certs/51391683.0"
	I1123 09:57:51.467466  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71092.pem && ln -fs /usr/share/ca-certificates/71092.pem /etc/ssl/certs/71092.pem"
	I1123 09:57:51.479299  311138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71092.pem
	I1123 09:57:51.484434  311138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:26 /usr/share/ca-certificates/71092.pem
	I1123 09:57:51.484508  311138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71092.pem
	I1123 09:57:51.525183  311138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71092.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:57:51.535904  311138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:57:51.540741  311138 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:57:51.540806  311138 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-696492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-696492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:57:51.540889  311138 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 09:57:51.540937  311138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:57:51.573411  311138 cri.go:89] found id: ""
	I1123 09:57:51.573483  311138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:57:51.583208  311138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:57:51.592170  311138 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:57:51.592237  311138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:57:51.601224  311138 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:57:51.601243  311138 kubeadm.go:158] found existing configuration files:
	
	I1123 09:57:51.601292  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 09:57:51.610806  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:57:51.610871  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:57:51.619590  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 09:57:51.628676  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:57:51.628753  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:57:51.638382  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 09:57:51.648357  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:57:51.648452  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:57:51.657606  311138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 09:57:51.667094  311138 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:57:51.667160  311138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:57:51.677124  311138 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:57:51.753028  311138 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 09:57:51.832851  311138 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	83b803e375a11       56cc512116c8f       9 seconds ago       Running             busybox                   0                   08fea159e192e       busybox                                     default
	6d27e56eea5cb       52546a367cc9e       15 seconds ago      Running             coredns                   0                   c35b50f299022       coredns-66bc5c9577-sx25q                    kube-system
	103095b7989ee       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   f875236ef29c4       storage-provisioner                         kube-system
	5c49f9103fd4c       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   52e89975c29a3       kindnet-d6zbp                               kube-system
	b1f2f40f83352       fc25172553d79       30 seconds ago      Running             kube-proxy                0                   3c931f4ebe3b6       kube-proxy-jpvhc                            kube-system
	d13615209a18d       c80c8dbafe7dd       41 seconds ago      Running             kube-controller-manager   0                   9b3682a73d7c9       kube-controller-manager-no-preload-309734   kube-system
	b7a0f8d20ac46       c3994bc696102       41 seconds ago      Running             kube-apiserver            0                   af6630aa22518       kube-apiserver-no-preload-309734            kube-system
	d3705422907a4       7dd6aaa1717ab       41 seconds ago      Running             kube-scheduler            0                   001d285d1626c       kube-scheduler-no-preload-309734            kube-system
	a81288f6ae55b       5f1f5298c888d       41 seconds ago      Running             etcd                      0                   7c2a74ce9f993       etcd-no-preload-309734                      kube-system
	
	
	==> containerd <==
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.437664858Z" level=info msg="CreateContainer within sandbox \"f875236ef29c4dcaca84613fe0d3342cd15f562c1b6c450727f815a45d23abec\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"103095b7989eeb9782636e7c2857b6f8b7ec6b0d8f19a4d16401f43390b5b6c8\""
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.438966110Z" level=info msg="StartContainer for \"103095b7989eeb9782636e7c2857b6f8b7ec6b0d8f19a4d16401f43390b5b6c8\""
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.441596627Z" level=info msg="Container 6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.442688190Z" level=info msg="connecting to shim 103095b7989eeb9782636e7c2857b6f8b7ec6b0d8f19a4d16401f43390b5b6c8" address="unix:///run/containerd/s/e3c90dc88ed2011a17e06013960c4ff36dcd5e5c4c0b472e967ab7c541e7cc59" protocol=ttrpc version=3
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.452070718Z" level=info msg="CreateContainer within sandbox \"c35b50f29902262db0930fff1232f8a0750b061fc8c644ff40065e2189b7a0c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7\""
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.452928128Z" level=info msg="StartContainer for \"6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7\""
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.454511893Z" level=info msg="connecting to shim 6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7" address="unix:///run/containerd/s/57a86b3d1f07aaee01b72ba5832cca7be61629982786bb2793fc5b74a12bbf4c" protocol=ttrpc version=3
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.687659950Z" level=info msg="StartContainer for \"6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7\" returns successfully"
	Nov 23 09:57:44 no-preload-309734 containerd[656]: time="2025-11-23T09:57:44.690525274Z" level=info msg="StartContainer for \"103095b7989eeb9782636e7c2857b6f8b7ec6b0d8f19a4d16401f43390b5b6c8\" returns successfully"
	Nov 23 09:57:48 no-preload-309734 containerd[656]: time="2025-11-23T09:57:48.717380764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:8d46a619-a382-4103-900c-1ce2911f6fb9,Namespace:default,Attempt:0,}"
	Nov 23 09:57:48 no-preload-309734 containerd[656]: time="2025-11-23T09:57:48.775104419Z" level=info msg="connecting to shim 08fea159e192e081b068d0606fe4a52cb2c890cdcf80e6514527a0c123f207a8" address="unix:///run/containerd/s/5259430685db90287109d0f7c347cef09803959202e1e931a6a2771afa7e7192" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:57:48 no-preload-309734 containerd[656]: time="2025-11-23T09:57:48.856379941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:8d46a619-a382-4103-900c-1ce2911f6fb9,Namespace:default,Attempt:0,} returns sandbox id \"08fea159e192e081b068d0606fe4a52cb2c890cdcf80e6514527a0c123f207a8\""
	Nov 23 09:57:48 no-preload-309734 containerd[656]: time="2025-11-23T09:57:48.858825417Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.962286000Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.963095464Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396647"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.964577847Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.966619514Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.967057622Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.108184767s"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.967095781Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.973882386Z" level=info msg="CreateContainer within sandbox \"08fea159e192e081b068d0606fe4a52cb2c890cdcf80e6514527a0c123f207a8\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.985011283Z" level=info msg="Container 83b803e375a11888348fd2bbcd5084e6b4b80efb2a13b2236d002edd28b3958e: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.992752698Z" level=info msg="CreateContainer within sandbox \"08fea159e192e081b068d0606fe4a52cb2c890cdcf80e6514527a0c123f207a8\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"83b803e375a11888348fd2bbcd5084e6b4b80efb2a13b2236d002edd28b3958e\""
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.993498398Z" level=info msg="StartContainer for \"83b803e375a11888348fd2bbcd5084e6b4b80efb2a13b2236d002edd28b3958e\""
	Nov 23 09:57:50 no-preload-309734 containerd[656]: time="2025-11-23T09:57:50.994585188Z" level=info msg="connecting to shim 83b803e375a11888348fd2bbcd5084e6b4b80efb2a13b2236d002edd28b3958e" address="unix:///run/containerd/s/5259430685db90287109d0f7c347cef09803959202e1e931a6a2771afa7e7192" protocol=ttrpc version=3
	Nov 23 09:57:51 no-preload-309734 containerd[656]: time="2025-11-23T09:57:51.056866970Z" level=info msg="StartContainer for \"83b803e375a11888348fd2bbcd5084e6b4b80efb2a13b2236d002edd28b3958e\" returns successfully"
	
	
	==> coredns [6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35403 - 63133 "HINFO IN 8016908280927694689.584937637230355027. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.034548045s
	
	
	==> describe nodes <==
	Name:               no-preload-309734
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-309734
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=no-preload-309734
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_57_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:57:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-309734
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:57:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:57:55 +0000   Sun, 23 Nov 2025 09:57:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:57:55 +0000   Sun, 23 Nov 2025 09:57:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:57:55 +0000   Sun, 23 Nov 2025 09:57:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:57:55 +0000   Sun, 23 Nov 2025 09:57:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-309734
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                3f1b400d-a81e-4472-94b0-c48cd427d30f
	  Boot ID:                    e4c4d39b-bebd-4037-9237-26b945dbe084
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-sx25q                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-no-preload-309734                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-d6zbp                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-309734             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-309734    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-jpvhc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-309734             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 36s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  36s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  36s   kubelet          Node no-preload-309734 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s   kubelet          Node no-preload-309734 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s   kubelet          Node no-preload-309734 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s   node-controller  Node no-preload-309734 event: Registered Node no-preload-309734 in Controller
	  Normal  NodeReady                17s   kubelet          Node no-preload-309734 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.288463] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 09:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[Nov23 09:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.195562] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +5.912917] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 c0 1c 98 33 a9 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.002091] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 47 bd bf 96 57 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[  +4.460318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 85 b9 91 f8 a4 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +2.904694] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	[Nov23 09:57] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 76 48 bf 8b d1 fc 08 06
	[  +0.000931] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	
	
	==> etcd [a81288f6ae55b6a042b8f67e3e9eedfe1c61dd371e39e06133e14aee6f968eb3] <==
	{"level":"info","ts":"2025-11-23T09:57:45.604990Z","caller":"traceutil/trace.go:172","msg":"trace[729928754] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"254.824024ms","start":"2025-11-23T09:57:45.350149Z","end":"2025-11-23T09:57:45.604973Z","steps":["trace[729928754] 'process raft request'  (duration: 254.674341ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:57:45.605011Z","caller":"traceutil/trace.go:172","msg":"trace[1938700971] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:411; }","duration":"140.550391ms","start":"2025-11-23T09:57:45.464447Z","end":"2025-11-23T09:57:45.604997Z","steps":["trace[1938700971] 'agreement among raft nodes before linearized reading'  (duration: 140.381152ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:45.941511Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"251.788183ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766362597583908 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-486tp\" mod_revision:297 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-486tp\" value_size:1199 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-486tp\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:57:45.941745Z","caller":"traceutil/trace.go:172","msg":"trace[1743399975] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"326.123128ms","start":"2025-11-23T09:57:45.615609Z","end":"2025-11-23T09:57:45.941732Z","steps":["trace[1743399975] 'process raft request'  (duration: 326.076119ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:45.941842Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:57:45.615584Z","time spent":"326.215431ms","remote":"127.0.0.1:44416","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5713,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-sx25q\" mod_revision:412 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-sx25q\" value_size:5654 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-sx25q\" > >"}
	{"level":"info","ts":"2025-11-23T09:57:45.941864Z","caller":"traceutil/trace.go:172","msg":"trace[1936674735] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"332.279773ms","start":"2025-11-23T09:57:45.609561Z","end":"2025-11-23T09:57:45.941841Z","steps":["trace[1936674735] 'process raft request'  (duration: 79.578516ms)","trace[1936674735] 'compare'  (duration: 251.670387ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:45.942006Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:57:45.609541Z","time spent":"332.400937ms","remote":"127.0.0.1:44736","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1258,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-486tp\" mod_revision:297 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-486tp\" value_size:1199 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-486tp\" > >"}
	{"level":"info","ts":"2025-11-23T09:57:45.941918Z","caller":"traceutil/trace.go:172","msg":"trace[8509950] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"332.190987ms","start":"2025-11-23T09:57:45.609715Z","end":"2025-11-23T09:57:45.941906Z","steps":["trace[8509950] 'process raft request'  (duration: 331.895647ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:45.942189Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:57:45.609702Z","time spent":"332.438504ms","remote":"127.0.0.1:44318","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":891,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:321 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:834 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"warn","ts":"2025-11-23T09:57:46.262257Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.656054ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766362597583913 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:414 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:834 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:57:46.262452Z","caller":"traceutil/trace.go:172","msg":"trace[1069729271] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"315.994309ms","start":"2025-11-23T09:57:45.946434Z","end":"2025-11-23T09:57:46.262428Z","steps":["trace[1069729271] 'process raft request'  (duration: 165.67524ms)","trace[1069729271] 'compare'  (duration: 149.449024ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:46.262562Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:57:45.946263Z","time spent":"316.246576ms","remote":"127.0.0.1:44318","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":891,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:414 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:834 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2025-11-23T09:57:46.262599Z","caller":"traceutil/trace.go:172","msg":"trace[695268418] linearizableReadLoop","detail":"{readStateIndex:435; appliedIndex:432; }","duration":"108.431625ms","start":"2025-11-23T09:57:46.154153Z","end":"2025-11-23T09:57:46.262584Z","steps":["trace[695268418] 'read index received'  (duration: 51.519µs)","trace[695268418] 'applied index is now lower than readState.Index'  (duration: 108.379558ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:46.262799Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.64839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:57:46.262885Z","caller":"traceutil/trace.go:172","msg":"trace[310550640] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:419; }","duration":"108.737789ms","start":"2025-11-23T09:57:46.154136Z","end":"2025-11-23T09:57:46.262874Z","steps":["trace[310550640] 'agreement among raft nodes before linearized reading'  (duration: 108.564956ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:57:46.262817Z","caller":"traceutil/trace.go:172","msg":"trace[1142413134] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"311.542847ms","start":"2025-11-23T09:57:45.951257Z","end":"2025-11-23T09:57:46.262800Z","steps":["trace[1142413134] 'process raft request'  (duration: 311.251022ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:46.263729Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:57:45.951238Z","time spent":"312.436616ms","remote":"127.0.0.1:44416","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4275,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:402 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:4221 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2025-11-23T09:57:46.262828Z","caller":"traceutil/trace.go:172","msg":"trace[590993168] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"313.957864ms","start":"2025-11-23T09:57:45.948856Z","end":"2025-11-23T09:57:46.262814Z","steps":["trace[590993168] 'process raft request'  (duration: 313.554765ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:57:46.263848Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:57:45.948835Z","time spent":"314.949298ms","remote":"127.0.0.1:45572","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4134,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" mod_revision:363 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" value_size:4074 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" > >"}
	{"level":"info","ts":"2025-11-23T09:57:46.434432Z","caller":"traceutil/trace.go:172","msg":"trace[1042096307] linearizableReadLoop","detail":"{readStateIndex:435; appliedIndex:435; }","duration":"154.238935ms","start":"2025-11-23T09:57:46.280166Z","end":"2025-11-23T09:57:46.434405Z","steps":["trace[1042096307] 'read index received'  (duration: 154.154392ms)","trace[1042096307] 'applied index is now lower than readState.Index'  (duration: 79.147µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:46.489436Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"209.253474ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:57:46.489506Z","caller":"traceutil/trace.go:172","msg":"trace[1882503074] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:419; }","duration":"209.334681ms","start":"2025-11-23T09:57:46.280154Z","end":"2025-11-23T09:57:46.489489Z","steps":["trace[1882503074] 'agreement among raft nodes before linearized reading'  (duration: 154.347258ms)","trace[1882503074] 'range keys from in-memory index tree'  (duration: 54.884092ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:57:46.489532Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.811021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:57:46.489569Z","caller":"traceutil/trace.go:172","msg":"trace[2032908186] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:420; }","duration":"204.853126ms","start":"2025-11-23T09:57:46.284706Z","end":"2025-11-23T09:57:46.489559Z","steps":["trace[2032908186] 'agreement among raft nodes before linearized reading'  (duration: 204.788988ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:57:46.489526Z","caller":"traceutil/trace.go:172","msg":"trace[506296957] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"219.451231ms","start":"2025-11-23T09:57:46.270057Z","end":"2025-11-23T09:57:46.489509Z","steps":["trace[506296957] 'process raft request'  (duration: 164.324633ms)","trace[506296957] 'compare'  (duration: 54.991634ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:58:00 up 40 min,  0 user,  load average: 4.93, 4.11, 2.62
	Linux no-preload-309734 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5c49f9103fd4c18deec14e3758e958db34380a181d3ea11344ed107acc94faab] <==
	I1123 09:57:33.661564       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:57:33.661882       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 09:57:33.662065       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:57:33.662081       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:57:33.662111       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:57:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:57:33.914181       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:57:33.914227       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:57:33.914238       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:57:33.914423       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:57:34.259526       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:57:34.259590       1 metrics.go:72] Registering metrics
	I1123 09:57:34.259697       1 controller.go:711] "Syncing nftables rules"
	I1123 09:57:43.914914       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:57:43.914995       1 main.go:301] handling current node
	I1123 09:57:53.910821       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:57:53.910866       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b7a0f8d20ac463989e63a3565c249816e2e20c9067287e9f2b8c3db6cfb05aab] <==
	E1123 09:57:21.143080       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 09:57:21.255182       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:21.255412       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:57:21.279519       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:57:21.279712       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:57:21.279867       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:21.348092       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:57:22.025838       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:57:22.034539       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:57:22.034566       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:57:22.997425       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:57:23.053253       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:57:23.232998       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:57:23.242170       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1123 09:57:23.243794       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:57:23.250061       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:57:23.336386       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:57:24.347834       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:57:24.360466       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:57:24.368827       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:57:29.096206       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:29.104211       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:57:29.392865       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:57:29.438704       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 09:57:56.530610       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:56958: use of closed network connection
	
	
	==> kube-controller-manager [d13615209a18dd7b287968a7f98989bb3ce87db942b906988e39fde11c294cce] <==
	I1123 09:57:28.346104       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 09:57:28.346123       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 09:57:28.358516       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-309734" podCIDRs=["10.244.0.0/24"]
	I1123 09:57:28.370402       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:57:28.370448       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:57:28.376787       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:57:28.384459       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 09:57:28.384606       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 09:57:28.384970       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 09:57:28.385801       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:57:28.385822       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:57:28.385853       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:57:28.385872       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:57:28.385948       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:57:28.386483       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:57:28.387261       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:57:28.387290       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:57:28.387296       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:57:28.387373       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:57:28.387426       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 09:57:28.387764       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:57:28.388909       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:57:28.390893       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:57:28.398493       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:57:48.339554       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b1f2f40f833522a80b40c076eb2228ca8ab64af5ae29ec412679554033ccf342] <==
	I1123 09:57:30.225772       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:57:30.326216       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:57:30.428019       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:57:30.428069       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 09:57:30.428155       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:57:30.470994       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:57:30.471157       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:57:30.480600       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:57:30.481164       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:57:30.481259       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:57:30.483774       1 config.go:309] "Starting node config controller"
	I1123 09:57:30.483932       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:57:30.483965       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:57:30.483886       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:57:30.484009       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:57:30.483832       1 config.go:200] "Starting service config controller"
	I1123 09:57:30.485261       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:57:30.483852       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:57:30.485625       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:57:30.584426       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:57:30.585604       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:57:30.585724       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d3705422907a474de42f4b2ba1fea7490c10e3083855a79fad006ba545fab905] <==
	E1123 09:57:21.323927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:57:21.324568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:57:21.324655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:57:21.324773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:57:21.324762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:57:21.324925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:57:21.325786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:57:21.325813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:57:22.181484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:57:22.216690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:57:22.262145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:57:22.281643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:57:22.288228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:57:22.289460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:57:22.306787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:57:22.453485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:57:22.463201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:57:22.504380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:57:22.518073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:57:22.533460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:57:22.552683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:57:22.587917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:57:22.601681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:57:22.727221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1123 09:57:25.613253       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:57:25 no-preload-309734 kubelet[2135]: I1123 09:57:25.301923    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-309734" podStartSLOduration=1.3019004889999999 podStartE2EDuration="1.301900489s" podCreationTimestamp="2025-11-23 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:25.301870004 +0000 UTC m=+1.184556442" watchObservedRunningTime="2025-11-23 09:57:25.301900489 +0000 UTC m=+1.184586938"
	Nov 23 09:57:25 no-preload-309734 kubelet[2135]: I1123 09:57:25.343592    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-309734" podStartSLOduration=3.343566116 podStartE2EDuration="3.343566116s" podCreationTimestamp="2025-11-23 09:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:25.322167734 +0000 UTC m=+1.204854180" watchObservedRunningTime="2025-11-23 09:57:25.343566116 +0000 UTC m=+1.226252553"
	Nov 23 09:57:25 no-preload-309734 kubelet[2135]: I1123 09:57:25.362057    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-309734" podStartSLOduration=1.3620392049999999 podStartE2EDuration="1.362039205s" podCreationTimestamp="2025-11-23 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:25.344940934 +0000 UTC m=+1.227627370" watchObservedRunningTime="2025-11-23 09:57:25.362039205 +0000 UTC m=+1.244725642"
	Nov 23 09:57:25 no-preload-309734 kubelet[2135]: I1123 09:57:25.362190    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-309734" podStartSLOduration=1.362179992 podStartE2EDuration="1.362179992s" podCreationTimestamp="2025-11-23 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:25.361953507 +0000 UTC m=+1.244639947" watchObservedRunningTime="2025-11-23 09:57:25.362179992 +0000 UTC m=+1.244866430"
	Nov 23 09:57:28 no-preload-309734 kubelet[2135]: I1123 09:57:28.409253    2135 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:57:28 no-preload-309734 kubelet[2135]: I1123 09:57:28.410053    2135 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.548826    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1c56dde-7af0-49ca-a982-04ae56add5f9-xtables-lock\") pod \"kindnet-d6zbp\" (UID: \"d1c56dde-7af0-49ca-a982-04ae56add5f9\") " pod="kube-system/kindnet-d6zbp"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.548904    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1c56dde-7af0-49ca-a982-04ae56add5f9-lib-modules\") pod \"kindnet-d6zbp\" (UID: \"d1c56dde-7af0-49ca-a982-04ae56add5f9\") " pod="kube-system/kindnet-d6zbp"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.548935    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpq2v\" (UniqueName: \"kubernetes.io/projected/d1c56dde-7af0-49ca-a982-04ae56add5f9-kube-api-access-qpq2v\") pod \"kindnet-d6zbp\" (UID: \"d1c56dde-7af0-49ca-a982-04ae56add5f9\") " pod="kube-system/kindnet-d6zbp"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.549020    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb0ab966-23fc-429f-bcfe-eb5726b865be-kube-proxy\") pod \"kube-proxy-jpvhc\" (UID: \"eb0ab966-23fc-429f-bcfe-eb5726b865be\") " pod="kube-system/kube-proxy-jpvhc"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.549055    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb0ab966-23fc-429f-bcfe-eb5726b865be-lib-modules\") pod \"kube-proxy-jpvhc\" (UID: \"eb0ab966-23fc-429f-bcfe-eb5726b865be\") " pod="kube-system/kube-proxy-jpvhc"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.549078    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxvtp\" (UniqueName: \"kubernetes.io/projected/eb0ab966-23fc-429f-bcfe-eb5726b865be-kube-api-access-zxvtp\") pod \"kube-proxy-jpvhc\" (UID: \"eb0ab966-23fc-429f-bcfe-eb5726b865be\") " pod="kube-system/kube-proxy-jpvhc"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.549103    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d1c56dde-7af0-49ca-a982-04ae56add5f9-cni-cfg\") pod \"kindnet-d6zbp\" (UID: \"d1c56dde-7af0-49ca-a982-04ae56add5f9\") " pod="kube-system/kindnet-d6zbp"
	Nov 23 09:57:29 no-preload-309734 kubelet[2135]: I1123 09:57:29.549128    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb0ab966-23fc-429f-bcfe-eb5726b865be-xtables-lock\") pod \"kube-proxy-jpvhc\" (UID: \"eb0ab966-23fc-429f-bcfe-eb5726b865be\") " pod="kube-system/kube-proxy-jpvhc"
	Nov 23 09:57:32 no-preload-309734 kubelet[2135]: I1123 09:57:32.926726    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jpvhc" podStartSLOduration=3.926700801 podStartE2EDuration="3.926700801s" podCreationTimestamp="2025-11-23 09:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:30.324860956 +0000 UTC m=+6.207547396" watchObservedRunningTime="2025-11-23 09:57:32.926700801 +0000 UTC m=+8.809387239"
	Nov 23 09:57:37 no-preload-309734 kubelet[2135]: I1123 09:57:37.321200    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-d6zbp" podStartSLOduration=5.317665175 podStartE2EDuration="8.321177483s" podCreationTimestamp="2025-11-23 09:57:29 +0000 UTC" firstStartedPulling="2025-11-23 09:57:30.284577539 +0000 UTC m=+6.167263969" lastFinishedPulling="2025-11-23 09:57:33.288089848 +0000 UTC m=+9.170776277" observedRunningTime="2025-11-23 09:57:34.337086182 +0000 UTC m=+10.219772617" watchObservedRunningTime="2025-11-23 09:57:37.321177483 +0000 UTC m=+13.203863919"
	Nov 23 09:57:43 no-preload-309734 kubelet[2135]: I1123 09:57:43.948176    2135 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:57:44 no-preload-309734 kubelet[2135]: I1123 09:57:44.063563    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b1352952-5fff-47aa-af05-dd6b2078fa39-tmp\") pod \"storage-provisioner\" (UID: \"b1352952-5fff-47aa-af05-dd6b2078fa39\") " pod="kube-system/storage-provisioner"
	Nov 23 09:57:44 no-preload-309734 kubelet[2135]: I1123 09:57:44.063643    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50adb46a-6c29-465a-adba-f806eeef81aa-config-volume\") pod \"coredns-66bc5c9577-sx25q\" (UID: \"50adb46a-6c29-465a-adba-f806eeef81aa\") " pod="kube-system/coredns-66bc5c9577-sx25q"
	Nov 23 09:57:44 no-preload-309734 kubelet[2135]: I1123 09:57:44.063673    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brm7p\" (UniqueName: \"kubernetes.io/projected/50adb46a-6c29-465a-adba-f806eeef81aa-kube-api-access-brm7p\") pod \"coredns-66bc5c9577-sx25q\" (UID: \"50adb46a-6c29-465a-adba-f806eeef81aa\") " pod="kube-system/coredns-66bc5c9577-sx25q"
	Nov 23 09:57:44 no-preload-309734 kubelet[2135]: I1123 09:57:44.063774    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9sgg\" (UniqueName: \"kubernetes.io/projected/b1352952-5fff-47aa-af05-dd6b2078fa39-kube-api-access-t9sgg\") pod \"storage-provisioner\" (UID: \"b1352952-5fff-47aa-af05-dd6b2078fa39\") " pod="kube-system/storage-provisioner"
	Nov 23 09:57:45 no-preload-309734 kubelet[2135]: I1123 09:57:45.607001    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sx25q" podStartSLOduration=16.606976312 podStartE2EDuration="16.606976312s" podCreationTimestamp="2025-11-23 09:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:45.606832745 +0000 UTC m=+21.489519183" watchObservedRunningTime="2025-11-23 09:57:45.606976312 +0000 UTC m=+21.489662748"
	Nov 23 09:57:48 no-preload-309734 kubelet[2135]: I1123 09:57:48.393282    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=18.393252975 podStartE2EDuration="18.393252975s" podCreationTimestamp="2025-11-23 09:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:57:46.264860218 +0000 UTC m=+22.147546655" watchObservedRunningTime="2025-11-23 09:57:48.393252975 +0000 UTC m=+24.275939412"
	Nov 23 09:57:48 no-preload-309734 kubelet[2135]: I1123 09:57:48.499644    2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg7d6\" (UniqueName: \"kubernetes.io/projected/8d46a619-a382-4103-900c-1ce2911f6fb9-kube-api-access-lg7d6\") pod \"busybox\" (UID: \"8d46a619-a382-4103-900c-1ce2911f6fb9\") " pod="default/busybox"
	Nov 23 09:57:51 no-preload-309734 kubelet[2135]: I1123 09:57:51.373809    2135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.264225157 podStartE2EDuration="3.373786874s" podCreationTimestamp="2025-11-23 09:57:48 +0000 UTC" firstStartedPulling="2025-11-23 09:57:48.85844247 +0000 UTC m=+24.741128886" lastFinishedPulling="2025-11-23 09:57:50.968004175 +0000 UTC m=+26.850690603" observedRunningTime="2025-11-23 09:57:51.373424002 +0000 UTC m=+27.256110440" watchObservedRunningTime="2025-11-23 09:57:51.373786874 +0000 UTC m=+27.256473311"
	
	
	==> storage-provisioner [103095b7989eeb9782636e7c2857b6f8b7ec6b0d8f19a4d16401f43390b5b6c8] <==
	I1123 09:57:44.548631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:57:44.557824       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:57:44.557879       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:57:44.562111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:44.686927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:57:44.687140       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:57:44.687422       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62d614ab-3709-4e6f-ae73-033d177de3d1", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-309734_1ad0791b-f836-4dd5-a010-1f2702a54569 became leader
	I1123 09:57:44.687583       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-309734_1ad0791b-f836-4dd5-a010-1f2702a54569!
	W1123 09:57:44.690282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:44.749212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:57:44.788474       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-309734_1ad0791b-f836-4dd5-a010-1f2702a54569!
	W1123 09:57:46.753163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:46.761100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:48.765258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:48.773161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:50.776936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:50.781283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:52.785706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:52.791036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:54.795558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:54.801510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:56.805598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:56.810855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:58.815136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:57:58.821012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-309734 -n no-preload-309734
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-309734 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (13.05s)
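
The common thread across these DeployApp failures is the file-descriptor check: once the busybox pod reports Running, the test execs `ulimit -n` in it and gets 1024 where it expects 1048576 (see the assertion logged for the default-k8s-diff-port run below). The following is a minimal, hedged sketch of reproducing that check by hand against one of the profiles named in these logs; it is not the actual test implementation, the file name is illustrative, and the context name and expected value are simply taken from the log output.

	// reproduce_ulimit_check.go - illustrative sketch only, not minikube test code.
	// Runs `ulimit -n` inside the busybox pod of a given kubectl context and
	// compares the result against the value the DeployApp test expects (1048576,
	// per the failure messages in this report).
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func podNofile(kubectlContext string) (string, error) {
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		got, err := podNofile("no-preload-309734") // context name taken from the logs above
		if err != nil {
			panic(err)
		}
		const want = "1048576" // expected value from the failure message
		fmt.Printf("ulimit -n in busybox: got %s, want %s\n", got, want)
	}
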

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-696492 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e7cb3e3f-9c9d-4b5c-ae5d-efdfc6bb9330] Pending
helpers_test.go:352: "busybox" [e7cb3e3f-9c9d-4b5c-ae5d-efdfc6bb9330] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e7cb3e3f-9c9d-4b5c-ae5d-efdfc6bb9330] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005424922s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-696492 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-696492
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-696492:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32",
	        "Created": "2025-11-23T09:57:46.827229115Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 312188,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:57:46.872164848Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32/hostname",
	        "HostsPath": "/var/lib/docker/containers/af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32/hosts",
	        "LogPath": "/var/lib/docker/containers/af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32/af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32-json.log",
	        "Name": "/default-k8s-diff-port-696492",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-696492:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-696492",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32",
	                "LowerDir": "/var/lib/docker/overlay2/3bd5d98036ec5cf749b85e9a5093210a965ee5843659df77fdd16ca6b0178a73-init/diff:/var/lib/docker/overlay2/c80a0dfdb81b7753b0a82e2bc6458805cbbad0a9ce5819c63e1d9b7b71ba226c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3bd5d98036ec5cf749b85e9a5093210a965ee5843659df77fdd16ca6b0178a73/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3bd5d98036ec5cf749b85e9a5093210a965ee5843659df77fdd16ca6b0178a73/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3bd5d98036ec5cf749b85e9a5093210a965ee5843659df77fdd16ca6b0178a73/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-696492",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-696492/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-696492",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-696492",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-696492",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c139c5c1061e3186dbf9016bce9aa974edaaef31339f75c4bd78d5704691bbfd",
	            "SandboxKey": "/var/run/docker/netns/c139c5c1061e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-696492": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ca961fd9658a4dcdf2dc766f9a71dcbc96f2bd9acb1a01fb0e9f54d16847232",
	                    "EndpointID": "524f07a9a39cab86a5af3cc9a2b50c1fcde9e4f2792e290296190f6ccec0a828",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d2:b0:c6:c1:04:87",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-696492",
	                        "af7d620060aa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
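
One detail worth noting in the inspect output above: HostConfig.Ulimits is an empty list, so the kic container presumably inherits whatever RLIMIT_NOFILE the Docker daemon and runtime apply by default rather than the 1048576 the test expects to see inside pods. A small, hedged sketch for pulling just that field out of docker inspect is below; the file name and approach are illustrative and not part of the test suite.

	// check_container_ulimits.go - illustrative sketch only.
	// Prints HostConfig.Ulimits for a container, the same field shown as
	// "Ulimits": [] in the docker inspect output above.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		container := "default-k8s-diff-port-696492" // container name from the inspect output above
		out, err := exec.Command("docker", "inspect",
			"--format", "{{json .HostConfig.Ulimits}}", container).Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("HostConfig.Ulimits for %s: %s", container, out)
	}

Under that assumption, an empty Ulimits array here would be consistent with the 1024 reported by `ulimit -n` in the pod, though the effective limit also depends on containerd and runc defaults, which this report does not show.
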
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-696492 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-696492 logs -n 25: (1.498009882s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p bridge-676928 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo containerd config dump                                                                                                                                                                                                        │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo crio config                                                                                                                                                                                                                   │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p bridge-676928                                                                                                                                                                                                                                    │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p disable-driver-mounts-178820                                                                                                                                                                                                                     │ disable-driver-mounts-178820 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ start   │ -p default-k8s-diff-port-696492 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-696492 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:58 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-709593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ stop    │ -p old-k8s-version-709593 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:58 UTC │
	│ addons  │ enable metrics-server -p embed-certs-412583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-412583           │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-309734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-309734            │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ stop    │ -p embed-certs-412583 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-412583           │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ stop    │ -p no-preload-309734 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-309734            │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-709593 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ start   │ -p old-k8s-version-709593 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-412583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-412583           │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ start   │ -p embed-certs-412583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-412583           │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-309734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-309734            │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ start   │ -p no-preload-309734 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-309734            │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:58:15
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:58:15.072651  322309 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:58:15.072769  322309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:58:15.072779  322309 out.go:374] Setting ErrFile to fd 2...
	I1123 09:58:15.072783  322309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:58:15.073028  322309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:58:15.073488  322309 out.go:368] Setting JSON to false
	I1123 09:58:15.074642  322309 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2434,"bootTime":1763889461,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:58:15.074708  322309 start.go:143] virtualization: kvm guest
	I1123 09:58:15.077222  322309 out.go:179] * [no-preload-309734] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:58:15.078795  322309 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:58:15.078861  322309 notify.go:221] Checking for updates...
	I1123 09:58:15.081612  322309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:58:15.083592  322309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:58:15.085012  322309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:58:15.086449  322309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:58:15.037472  322139 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:58:15.037519  322139 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 09:58:15.037543  322139 cache.go:65] Caching tarball of preloaded images
	I1123 09:58:15.037602  322139 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:58:15.037626  322139 preload.go:238] Found /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 09:58:15.037815  322139 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 09:58:15.037968  322139 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/config.json ...
	I1123 09:58:15.065607  322139 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:58:15.065630  322139 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:58:15.065651  322139 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:58:15.065688  322139 start.go:360] acquireMachinesLock for embed-certs-412583: {Name:mk2ebf094fb67f9062146f05e50688fe8a83a51f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.065782  322139 start.go:364] duration metric: took 55.77µs to acquireMachinesLock for "embed-certs-412583"
	I1123 09:58:15.065826  322139 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:58:15.065836  322139 fix.go:54] fixHost starting: 
	I1123 09:58:15.066101  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:15.086962  322139 fix.go:112] recreateIfNeeded on embed-certs-412583: state=Stopped err=<nil>
	W1123 09:58:15.086994  322139 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:58:15.088780  322309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:58:15.090713  322309 config.go:182] Loaded profile config "no-preload-309734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:58:15.091534  322309 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:58:15.142488  322309 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:58:15.142608  322309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:58:15.238772  322309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 09:58:15.226367289 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:58:15.238927  322309 docker.go:319] overlay module found
	I1123 09:58:15.241487  322309 out.go:179] * Using the docker driver based on existing profile
	I1123 09:58:15.242969  322309 start.go:309] selected driver: docker
	I1123 09:58:15.242994  322309 start.go:927] validating driver "docker" against &{Name:no-preload-309734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-309734 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:58:15.243100  322309 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:58:15.243879  322309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:58:15.337610  322309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-23 09:58:15.318695864 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:58:15.337997  322309 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:58:15.338035  322309 cni.go:84] Creating CNI manager for ""
	I1123 09:58:15.338101  322309 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:58:15.338146  322309 start.go:353] cluster config:
	{Name:no-preload-309734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-309734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:58:15.340626  322309 out.go:179] * Starting "no-preload-309734" primary control-plane node in "no-preload-309734" cluster
	I1123 09:58:15.342090  322309 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 09:58:15.343441  322309 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:58:15.344764  322309 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:58:15.344928  322309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/config.json ...
	I1123 09:58:15.345379  322309 cache.go:107] acquiring lock: {Name:mk112461026d48693cc25788bbfb66278c54f619 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345475  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 09:58:15.345502  322309 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 130.227µs
	I1123 09:58:15.345522  322309 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 09:58:15.345547  322309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:58:15.345665  322309 cache.go:107] acquiring lock: {Name:mkd4fe11e7e40464d53a2ff6b0744dfdf60a0875 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345733  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 09:58:15.345742  322309 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 84.682µs
	I1123 09:58:15.345761  322309 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 09:58:15.345776  322309 cache.go:107] acquiring lock: {Name:mkb1b2704e1a1eae76c0dbc69daffb8fbf8e8b17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345794  322309 cache.go:107] acquiring lock: {Name:mkead7f7924767c6c5c6ba37b30d495d696cb12e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345822  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 09:58:15.345829  322309 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 56.357µs
	I1123 09:58:15.345837  322309 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 09:58:15.345853  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 09:58:15.345851  322309 cache.go:107] acquiring lock: {Name:mk79f56807c84f4c041d28aec3cf7394e6568026 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345860  322309 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 81.717µs
	I1123 09:58:15.345869  322309 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 09:58:15.345887  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 09:58:15.345882  322309 cache.go:107] acquiring lock: {Name:mk34f30227131fc2a94276e966f4a2f34086895a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345894  322309 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 46.052µs
	I1123 09:58:15.345902  322309 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 09:58:15.345917  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1123 09:58:15.345924  322309 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 44.701µs
	I1123 09:58:15.345932  322309 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 09:58:15.345946  322309 cache.go:107] acquiring lock: {Name:mk8276b58635a3e009984be7b62fe8a1c1fe3134 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345949  322309 cache.go:107] acquiring lock: {Name:mk5361059e757e1792013f0f7e2d2932441044f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345981  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 09:58:15.345988  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 09:58:15.345988  322309 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 45.104µs
	I1123 09:58:15.345996  322309 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 09:58:15.345996  322309 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 51.215µs
	I1123 09:58:15.346004  322309 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 09:58:15.346011  322309 cache.go:87] Successfully saved all images to host disk.
	I1123 09:58:15.386415  322309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:58:15.386537  322309 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:58:15.386584  322309 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:58:15.386664  322309 start.go:360] acquireMachinesLock for no-preload-309734: {Name:mk62afa41d2500936444190e148c873f4b7bcc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.386813  322309 start.go:364] duration metric: took 81.739µs to acquireMachinesLock for "no-preload-309734"
	I1123 09:58:15.386837  322309 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:58:15.386881  322309 fix.go:54] fixHost starting: 
	I1123 09:58:15.388071  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:15.420692  322309 fix.go:112] recreateIfNeeded on no-preload-309734: state=Stopped err=<nil>
	W1123 09:58:15.420757  322309 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:58:12.805486  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:58:12.805555  319511 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:58:12.805681  319511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-709593
	I1123 09:58:12.836860  319511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/old-k8s-version-709593/id_rsa Username:docker}
	I1123 09:58:12.838996  319511 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:58:12.839024  319511 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:58:12.839088  319511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-709593
	I1123 09:58:12.848801  319511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/old-k8s-version-709593/id_rsa Username:docker}
	I1123 09:58:12.859586  319511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/old-k8s-version-709593/id_rsa Username:docker}
	I1123 09:58:12.873468  319511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/old-k8s-version-709593/id_rsa Username:docker}
	I1123 09:58:12.937177  319511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:58:12.952126  319511 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-709593" to be "Ready" ...
	I1123 09:58:12.969086  319511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:58:12.976204  319511 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 09:58:12.976229  319511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 09:58:12.983442  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:58:12.983473  319511 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:58:13.001233  319511 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 09:58:13.001267  319511 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 09:58:13.004404  319511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:58:13.009276  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:58:13.009301  319511 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:58:13.029767  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:58:13.029801  319511 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:58:13.038403  319511 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:58:13.038429  319511 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 09:58:13.050997  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:58:13.051025  319511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:58:13.058579  319511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:58:13.071241  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:58:13.071363  319511 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:58:13.092144  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:58:13.092175  319511 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:58:13.111646  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:58:13.111676  319511 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:58:13.134509  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:58:13.134541  319511 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:58:13.154447  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:58:13.154473  319511 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:58:13.169068  319511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:58:15.364147  319511 node_ready.go:49] node "old-k8s-version-709593" is "Ready"
	I1123 09:58:15.364190  319511 node_ready.go:38] duration metric: took 2.412025869s for node "old-k8s-version-709593" to be "Ready" ...
	I1123 09:58:15.364208  319511 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:58:15.364263  319511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1123 09:58:13.031451  311138 node_ready.go:57] node "default-k8s-diff-port-696492" has "Ready":"False" status (will retry)
	W1123 09:58:15.032219  311138 node_ready.go:57] node "default-k8s-diff-port-696492" has "Ready":"False" status (will retry)
	I1123 09:58:16.357752  319511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.353308526s)
	I1123 09:58:16.358184  319511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.389065962s)
	I1123 09:58:16.534484  319511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.475808496s)
	I1123 09:58:16.534610  319511 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-709593"
	I1123 09:58:16.907361  319511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.738234252s)
	I1123 09:58:16.907416  319511 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.54313413s)
	I1123 09:58:16.907568  319511 api_server.go:72] duration metric: took 4.149434047s to wait for apiserver process to appear ...
	I1123 09:58:16.907584  319511 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:58:16.907604  319511 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:58:16.912236  319511 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-709593 addons enable metrics-server
	
	I1123 09:58:16.915026  319511 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:58:16.916617  319511 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
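	[editor's note] The addon step above reduces to a single remote kubectl invocation with several -f flags under an explicit KUBECONFIG (see the two ssh_runner "apply" commands earlier in this log). A minimal local sketch of that invocation in Go, using the same paths the log shows; the program itself is illustrative and is not minikube's ssh_runner:

    // Illustrative sketch: apply the metrics-server addon manifests with one
    // kubectl call, mirroring the ssh_runner command logged above. Paths come
    // from the log; running this outside the minikube node is hypothetical.
    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	args := []string{"apply",
    		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.28.0/kubectl", args...)
    	// The log runs this under sudo with KUBECONFIG pointed at the node's kubeconfig.
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }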
	I1123 09:58:15.088796  322139 out.go:252] * Restarting existing docker container for "embed-certs-412583" ...
	I1123 09:58:15.088879  322139 cli_runner.go:164] Run: docker start embed-certs-412583
	I1123 09:58:15.542380  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:15.569667  322139 kic.go:430] container "embed-certs-412583" state is running.
	I1123 09:58:15.570202  322139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412583
	I1123 09:58:15.603987  322139 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/config.json ...
	I1123 09:58:15.604290  322139 machine.go:94] provisionDockerMachine start ...
	I1123 09:58:15.604407  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:15.631755  322139 main.go:143] libmachine: Using SSH client type: native
	I1123 09:58:15.632218  322139 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 09:58:15.632283  322139 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:58:15.633493  322139 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51154->127.0.0.1:33118: read: connection reset by peer
	I1123 09:58:18.784189  322139 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412583
	
	I1123 09:58:18.784226  322139 ubuntu.go:182] provisioning hostname "embed-certs-412583"
	I1123 09:58:18.784312  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:18.804215  322139 main.go:143] libmachine: Using SSH client type: native
	I1123 09:58:18.804525  322139 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 09:58:18.804542  322139 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-412583 && echo "embed-certs-412583" | sudo tee /etc/hostname
	I1123 09:58:18.963550  322139 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412583
	
	I1123 09:58:18.963630  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:18.985155  322139 main.go:143] libmachine: Using SSH client type: native
	I1123 09:58:18.985406  322139 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 09:58:18.985436  322139 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-412583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-412583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-412583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:58:19.136033  322139 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:58:19.136073  322139 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-3552/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-3552/.minikube}
	I1123 09:58:19.136099  322139 ubuntu.go:190] setting up certificates
	I1123 09:58:19.136127  322139 provision.go:84] configureAuth start
	I1123 09:58:19.136188  322139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412583
	I1123 09:58:19.156884  322139 provision.go:143] copyHostCerts
	I1123 09:58:19.156946  322139 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem, removing ...
	I1123 09:58:19.156960  322139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem
	I1123 09:58:19.157038  322139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem (1123 bytes)
	I1123 09:58:19.157162  322139 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem, removing ...
	I1123 09:58:19.157175  322139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem
	I1123 09:58:19.157204  322139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem (1679 bytes)
	I1123 09:58:19.157275  322139 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem, removing ...
	I1123 09:58:19.157283  322139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem
	I1123 09:58:19.157306  322139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem (1082 bytes)
	I1123 09:58:19.157389  322139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem org=jenkins.embed-certs-412583 san=[127.0.0.1 192.168.103.2 embed-certs-412583 localhost minikube]
	I1123 09:58:19.341356  322139 provision.go:177] copyRemoteCerts
	I1123 09:58:19.341419  322139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:58:19.341455  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:19.361384  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:19.468486  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:58:19.491156  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:58:19.513792  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 09:58:19.535858  322139 provision.go:87] duration metric: took 399.716299ms to configureAuth
	I1123 09:58:19.535897  322139 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:58:19.536067  322139 config.go:182] Loaded profile config "embed-certs-412583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:58:19.536082  322139 machine.go:97] duration metric: took 3.93177997s to provisionDockerMachine
	I1123 09:58:19.536090  322139 start.go:293] postStartSetup for "embed-certs-412583" (driver="docker")
	I1123 09:58:19.536098  322139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:58:19.536142  322139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:58:19.536178  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:19.559284  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:19.665552  322139 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:58:19.669817  322139 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:58:19.669850  322139 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:58:19.669864  322139 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/addons for local assets ...
	I1123 09:58:19.669920  322139 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/files for local assets ...
	I1123 09:58:19.670030  322139 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem -> 71092.pem in /etc/ssl/certs
	I1123 09:58:19.670160  322139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:58:19.679598  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:58:19.700393  322139 start.go:296] duration metric: took 164.286793ms for postStartSetup
	I1123 09:58:19.700617  322139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:58:19.700679  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:19.723251  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:15.423567  322309 out.go:252] * Restarting existing docker container for "no-preload-309734" ...
	I1123 09:58:15.423745  322309 cli_runner.go:164] Run: docker start no-preload-309734
	I1123 09:58:15.785997  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:15.812042  322309 kic.go:430] container "no-preload-309734" state is running.
	I1123 09:58:15.812604  322309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-309734
	I1123 09:58:15.840096  322309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/config.json ...
	I1123 09:58:15.840311  322309 machine.go:94] provisionDockerMachine start ...
	I1123 09:58:15.840397  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:15.861975  322309 main.go:143] libmachine: Using SSH client type: native
	I1123 09:58:15.862276  322309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:58:15.862298  322309 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:58:15.862938  322309 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40370->127.0.0.1:33123: read: connection reset by peer
	I1123 09:58:19.014724  322309 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-309734
	
	I1123 09:58:19.014760  322309 ubuntu.go:182] provisioning hostname "no-preload-309734"
	I1123 09:58:19.014837  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:19.035897  322309 main.go:143] libmachine: Using SSH client type: native
	I1123 09:58:19.036158  322309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:58:19.036180  322309 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-309734 && echo "no-preload-309734" | sudo tee /etc/hostname
	I1123 09:58:19.197111  322309 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-309734
	
	I1123 09:58:19.197222  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:19.217292  322309 main.go:143] libmachine: Using SSH client type: native
	I1123 09:58:19.217599  322309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:58:19.217634  322309 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-309734' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-309734/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-309734' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:58:19.369774  322309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:58:19.369800  322309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-3552/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-3552/.minikube}
	I1123 09:58:19.369826  322309 ubuntu.go:190] setting up certificates
	I1123 09:58:19.369838  322309 provision.go:84] configureAuth start
	I1123 09:58:19.369907  322309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-309734
	I1123 09:58:19.390827  322309 provision.go:143] copyHostCerts
	I1123 09:58:19.390891  322309 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem, removing ...
	I1123 09:58:19.390907  322309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem
	I1123 09:58:19.390973  322309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem (1082 bytes)
	I1123 09:58:19.391077  322309 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem, removing ...
	I1123 09:58:19.391092  322309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem
	I1123 09:58:19.391117  322309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem (1123 bytes)
	I1123 09:58:19.391233  322309 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem, removing ...
	I1123 09:58:19.391244  322309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem
	I1123 09:58:19.391264  322309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem (1679 bytes)
	I1123 09:58:19.391312  322309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem org=jenkins.no-preload-309734 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-309734]
	I1123 09:58:19.511909  322309 provision.go:177] copyRemoteCerts
	I1123 09:58:19.511965  322309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:58:19.512011  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:19.533318  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:19.642831  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:58:19.662615  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 09:58:19.684205  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:58:19.704592  322309 provision.go:87] duration metric: took 334.741077ms to configureAuth
	I1123 09:58:19.704633  322309 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:58:19.704835  322309 config.go:182] Loaded profile config "no-preload-309734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:58:19.704853  322309 machine.go:97] duration metric: took 3.864533097s to provisionDockerMachine
	I1123 09:58:19.704864  322309 start.go:293] postStartSetup for "no-preload-309734" (driver="docker")
	I1123 09:58:19.704876  322309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:58:19.704946  322309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:58:19.704998  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:19.725972  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:19.830882  322309 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:58:19.835302  322309 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:58:19.835376  322309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:58:19.835404  322309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/addons for local assets ...
	I1123 09:58:19.835474  322309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/files for local assets ...
	I1123 09:58:19.835585  322309 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem -> 71092.pem in /etc/ssl/certs
	I1123 09:58:19.835733  322309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:58:19.845344  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:58:19.869872  322309 start.go:296] duration metric: took 164.993501ms for postStartSetup
	I1123 09:58:19.869963  322309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:58:19.870010  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:19.891645  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:19.995281  322309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:58:20.000619  322309 fix.go:56] duration metric: took 4.613765689s for fixHost
	I1123 09:58:20.000652  322309 start.go:83] releasing machines lock for "no-preload-309734", held for 4.613822979s
	I1123 09:58:20.000767  322309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-309734
	I1123 09:58:20.022558  322309 ssh_runner.go:195] Run: cat /version.json
	I1123 09:58:20.022576  322309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:58:20.022624  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:20.022662  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:20.044105  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:20.044757  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:16.916812  319511 api_server.go:141] control plane version: v1.28.0
	I1123 09:58:16.916844  319511 api_server.go:131] duration metric: took 9.252525ms to wait for apiserver health ...
	I1123 09:58:16.916855  319511 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:58:16.918447  319511 addons.go:530] duration metric: took 4.159873845s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1123 09:58:16.922096  319511 system_pods.go:59] 9 kube-system pods found
	I1123 09:58:16.922144  319511 system_pods.go:61] "coredns-5dd5756b68-gf5sx" [9a493920-3739-4eb9-8426-3590a8f2ee51] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:16.922158  319511 system_pods.go:61] "etcd-old-k8s-version-709593" [ae440f4a-2d2c-44c8-9481-9696039f9cea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:58:16.922169  319511 system_pods.go:61] "kindnet-tpvt2" [fd3daece-c28b-4efa-ae53-16c16790e5be] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:58:16.922182  319511 system_pods.go:61] "kube-apiserver-old-k8s-version-709593" [e9aebd01-2f2f-4e8e-b3b9-365be3da678e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:58:16.922197  319511 system_pods.go:61] "kube-controller-manager-old-k8s-version-709593" [35acfac2-d03f-4f28-b69f-0d34ef891c0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:58:16.922209  319511 system_pods.go:61] "kube-proxy-sgv48" [f5d963bd-a2f2-44d2-969c-d219c55aba33] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:58:16.922223  319511 system_pods.go:61] "kube-scheduler-old-k8s-version-709593" [8d265257-a737-4543-b416-8535ffae7725] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:58:16.922235  319511 system_pods.go:61] "metrics-server-57f55c9bc5-98n6p" [7086738c-57f8-491c-abfa-bfa7c99c5a03] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:58:16.922243  319511 system_pods.go:61] "storage-provisioner" [ba58926e-fdf3-4750-b44d-7c94a027737e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:16.922268  319511 system_pods.go:74] duration metric: took 5.404916ms to wait for pod list to return data ...
	I1123 09:58:16.922278  319511 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:58:16.925487  319511 default_sa.go:45] found service account: "default"
	I1123 09:58:16.925518  319511 default_sa.go:55] duration metric: took 3.233126ms for default service account to be created ...
	I1123 09:58:16.925530  319511 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:58:16.931146  319511 system_pods.go:86] 9 kube-system pods found
	I1123 09:58:16.931197  319511 system_pods.go:89] "coredns-5dd5756b68-gf5sx" [9a493920-3739-4eb9-8426-3590a8f2ee51] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:16.931213  319511 system_pods.go:89] "etcd-old-k8s-version-709593" [ae440f4a-2d2c-44c8-9481-9696039f9cea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:58:16.931224  319511 system_pods.go:89] "kindnet-tpvt2" [fd3daece-c28b-4efa-ae53-16c16790e5be] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:58:16.931234  319511 system_pods.go:89] "kube-apiserver-old-k8s-version-709593" [e9aebd01-2f2f-4e8e-b3b9-365be3da678e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:58:16.931247  319511 system_pods.go:89] "kube-controller-manager-old-k8s-version-709593" [35acfac2-d03f-4f28-b69f-0d34ef891c0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:58:16.931261  319511 system_pods.go:89] "kube-proxy-sgv48" [f5d963bd-a2f2-44d2-969c-d219c55aba33] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:58:16.931269  319511 system_pods.go:89] "kube-scheduler-old-k8s-version-709593" [8d265257-a737-4543-b416-8535ffae7725] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:58:16.931280  319511 system_pods.go:89] "metrics-server-57f55c9bc5-98n6p" [7086738c-57f8-491c-abfa-bfa7c99c5a03] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:58:16.931288  319511 system_pods.go:89] "storage-provisioner" [ba58926e-fdf3-4750-b44d-7c94a027737e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:16.931302  319511 system_pods.go:126] duration metric: took 5.763498ms to wait for k8s-apps to be running ...
	I1123 09:58:16.931317  319511 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:58:16.931414  319511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:58:16.947858  319511 system_svc.go:56] duration metric: took 16.533152ms WaitForService to wait for kubelet
	I1123 09:58:16.947892  319511 kubeadm.go:587] duration metric: took 4.189759298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:58:16.947917  319511 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:58:16.950929  319511 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:58:16.950953  319511 node_conditions.go:123] node cpu capacity is 8
	I1123 09:58:16.950968  319511 node_conditions.go:105] duration metric: took 3.045706ms to run NodePressure ...
	I1123 09:58:16.950978  319511 start.go:242] waiting for startup goroutines ...
	I1123 09:58:16.950985  319511 start.go:247] waiting for cluster config update ...
	I1123 09:58:16.950995  319511 start.go:256] writing updated cluster config ...
	I1123 09:58:16.951224  319511 ssh_runner.go:195] Run: rm -f paused
	I1123 09:58:16.956007  319511 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:58:16.960673  319511 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gf5sx" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:58:18.967689  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
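	[editor's note] The pod_ready lines above come from an extra wait of up to 4m0s for core kube-system pods to report Ready. A rough client-go sketch of that kind of readiness poll, using the coredns pod name from this run; the helper below is illustrative, not minikube's pod_ready.go:

    // Poll a kube-system pod until its PodReady condition is True, or time out.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s extra wait above
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-gf5sx", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for Ready")
    }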
	I1123 09:58:19.826235  322139 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:58:19.831841  322139 fix.go:56] duration metric: took 4.765999719s for fixHost
	I1123 09:58:19.831872  322139 start.go:83] releasing machines lock for "embed-certs-412583", held for 4.766074158s
	I1123 09:58:19.831944  322139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412583
	I1123 09:58:19.853394  322139 ssh_runner.go:195] Run: cat /version.json
	I1123 09:58:19.853416  322139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:58:19.853450  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:19.853513  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:19.876679  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:19.876891  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:20.038564  322139 ssh_runner.go:195] Run: systemctl --version
	I1123 09:58:20.047319  322139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:58:20.052518  322139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:58:20.052594  322139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:58:20.061644  322139 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:58:20.061671  322139 start.go:496] detecting cgroup driver to use...
	I1123 09:58:20.061718  322139 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:58:20.061779  322139 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 09:58:20.081435  322139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 09:58:20.097344  322139 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:58:20.097421  322139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:58:20.113725  322139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:58:20.128400  322139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:58:20.231648  322139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:58:20.316032  322139 docker.go:234] disabling docker service ...
	I1123 09:58:20.316100  322139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:58:20.331383  322139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:58:20.347697  322139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:58:20.472315  322139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:58:20.579927  322139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:58:20.596227  322139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:58:20.625764  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 09:58:20.637317  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 09:58:20.647853  322139 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 09:58:20.647915  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 09:58:20.658746  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:58:20.669170  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 09:58:20.679447  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:58:20.689943  322139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:58:20.699586  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 09:58:20.714077  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 09:58:20.725403  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 09:58:20.736101  322139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:58:20.744980  322139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:58:20.755319  322139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:58:20.869025  322139 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 09:58:21.025087  322139 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 09:58:21.025164  322139 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 09:58:21.030199  322139 start.go:564] Will wait 60s for crictl version
	I1123 09:58:21.030278  322139 ssh_runner.go:195] Run: which crictl
	I1123 09:58:21.035718  322139 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:58:21.067308  322139 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 09:58:21.067444  322139 ssh_runner.go:195] Run: containerd --version
	I1123 09:58:21.090291  322139 ssh_runner.go:195] Run: containerd --version
	I1123 09:58:21.116354  322139 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
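	[editor's note] Throughout this log the cli_runner resolves the SSH endpoint of a restarted kicbase container by asking Docker for the host port mapped to the container's 22/tcp. A standalone sketch of that lookup using the same Go template shown in the inspect commands above; the function name and hard-coded profile are illustrative only:

    // Resolve the host port forwarded to a container's SSH port (22/tcp),
    // shelling out to the docker CLI with the template from the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func sshHostPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("docker inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("embed-certs-412583")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh port:", port) // e.g. 33118 in the run above
    }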
	W1123 09:58:17.530438  311138 node_ready.go:57] node "default-k8s-diff-port-696492" has "Ready":"False" status (will retry)
	W1123 09:58:19.531594  311138 node_ready.go:57] node "default-k8s-diff-port-696492" has "Ready":"False" status (will retry)
	I1123 09:58:20.531620  311138 node_ready.go:49] node "default-k8s-diff-port-696492" is "Ready"
	I1123 09:58:20.531690  311138 node_ready.go:38] duration metric: took 11.50429796s for node "default-k8s-diff-port-696492" to be "Ready" ...
	I1123 09:58:20.531711  311138 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:58:20.531779  311138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:58:20.548914  311138 api_server.go:72] duration metric: took 11.829659475s to wait for apiserver process to appear ...
	I1123 09:58:20.548948  311138 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:58:20.548973  311138 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 09:58:20.556266  311138 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 09:58:20.557639  311138 api_server.go:141] control plane version: v1.34.1
	I1123 09:58:20.557673  311138 api_server.go:131] duration metric: took 8.71495ms to wait for apiserver health ...
	I1123 09:58:20.557685  311138 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:58:20.563372  311138 system_pods.go:59] 8 kube-system pods found
	I1123 09:58:20.563584  311138 system_pods.go:61] "coredns-66bc5c9577-49wlg" [967d1f43-a5b7-4bf8-8111-c014f4b7594f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:20.563630  311138 system_pods.go:61] "etcd-default-k8s-diff-port-696492" [99ce30c3-ea20-422d-a7d8-4b8f58a70c07] Running
	I1123 09:58:20.563642  311138 system_pods.go:61] "kindnet-kx2hw" [1c3d2821-8e77-421a-8ccc-8d3d76d1380d] Running
	I1123 09:58:20.563658  311138 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-696492" [98117bb1-3ea0-4402-8845-6ee90c435d23] Running
	I1123 09:58:20.563666  311138 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-696492" [abb3ab85-565b-4911-8dbc-09ea147eb30b] Running
	I1123 09:58:20.563673  311138 system_pods.go:61] "kube-proxy-q6wsc" [ad2f26f5-ff1d-4acf-bea5-8ad34dc37130] Running
	I1123 09:58:20.563680  311138 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-696492" [b21530e3-7cc1-445f-82cd-1d11d79f9e20] Running
	I1123 09:58:20.563699  311138 system_pods.go:61] "storage-provisioner" [bbfe2e2e-e519-43f0-8575-91a152db45bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:20.563708  311138 system_pods.go:74] duration metric: took 6.015429ms to wait for pod list to return data ...
	I1123 09:58:20.563720  311138 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:58:20.570703  311138 default_sa.go:45] found service account: "default"
	I1123 09:58:20.570736  311138 default_sa.go:55] duration metric: took 7.009974ms for default service account to be created ...
	I1123 09:58:20.570746  311138 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:58:20.575207  311138 system_pods.go:86] 8 kube-system pods found
	I1123 09:58:20.575242  311138 system_pods.go:89] "coredns-66bc5c9577-49wlg" [967d1f43-a5b7-4bf8-8111-c014f4b7594f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:20.575249  311138 system_pods.go:89] "etcd-default-k8s-diff-port-696492" [99ce30c3-ea20-422d-a7d8-4b8f58a70c07] Running
	I1123 09:58:20.575255  311138 system_pods.go:89] "kindnet-kx2hw" [1c3d2821-8e77-421a-8ccc-8d3d76d1380d] Running
	I1123 09:58:20.575259  311138 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-696492" [98117bb1-3ea0-4402-8845-6ee90c435d23] Running
	I1123 09:58:20.575263  311138 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-696492" [abb3ab85-565b-4911-8dbc-09ea147eb30b] Running
	I1123 09:58:20.575266  311138 system_pods.go:89] "kube-proxy-q6wsc" [ad2f26f5-ff1d-4acf-bea5-8ad34dc37130] Running
	I1123 09:58:20.575270  311138 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-696492" [b21530e3-7cc1-445f-82cd-1d11d79f9e20] Running
	I1123 09:58:20.575274  311138 system_pods.go:89] "storage-provisioner" [bbfe2e2e-e519-43f0-8575-91a152db45bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:20.575296  311138 retry.go:31] will retry after 192.26313ms: missing components: kube-dns
	I1123 09:58:20.775706  311138 system_pods.go:86] 8 kube-system pods found
	I1123 09:58:20.775755  311138 system_pods.go:89] "coredns-66bc5c9577-49wlg" [967d1f43-a5b7-4bf8-8111-c014f4b7594f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:20.775763  311138 system_pods.go:89] "etcd-default-k8s-diff-port-696492" [99ce30c3-ea20-422d-a7d8-4b8f58a70c07] Running
	I1123 09:58:20.775771  311138 system_pods.go:89] "kindnet-kx2hw" [1c3d2821-8e77-421a-8ccc-8d3d76d1380d] Running
	I1123 09:58:20.775777  311138 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-696492" [98117bb1-3ea0-4402-8845-6ee90c435d23] Running
	I1123 09:58:20.775783  311138 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-696492" [abb3ab85-565b-4911-8dbc-09ea147eb30b] Running
	I1123 09:58:20.775789  311138 system_pods.go:89] "kube-proxy-q6wsc" [ad2f26f5-ff1d-4acf-bea5-8ad34dc37130] Running
	I1123 09:58:20.775794  311138 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-696492" [b21530e3-7cc1-445f-82cd-1d11d79f9e20] Running
	I1123 09:58:20.775801  311138 system_pods.go:89] "storage-provisioner" [bbfe2e2e-e519-43f0-8575-91a152db45bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:20.775821  311138 retry.go:31] will retry after 254.648665ms: missing components: kube-dns
	I1123 09:58:21.035635  311138 system_pods.go:86] 8 kube-system pods found
	I1123 09:58:21.035673  311138 system_pods.go:89] "coredns-66bc5c9577-49wlg" [967d1f43-a5b7-4bf8-8111-c014f4b7594f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:21.035679  311138 system_pods.go:89] "etcd-default-k8s-diff-port-696492" [99ce30c3-ea20-422d-a7d8-4b8f58a70c07] Running
	I1123 09:58:21.035686  311138 system_pods.go:89] "kindnet-kx2hw" [1c3d2821-8e77-421a-8ccc-8d3d76d1380d] Running
	I1123 09:58:21.035689  311138 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-696492" [98117bb1-3ea0-4402-8845-6ee90c435d23] Running
	I1123 09:58:21.035694  311138 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-696492" [abb3ab85-565b-4911-8dbc-09ea147eb30b] Running
	I1123 09:58:21.035697  311138 system_pods.go:89] "kube-proxy-q6wsc" [ad2f26f5-ff1d-4acf-bea5-8ad34dc37130] Running
	I1123 09:58:21.035703  311138 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-696492" [b21530e3-7cc1-445f-82cd-1d11d79f9e20] Running
	I1123 09:58:21.035708  311138 system_pods.go:89] "storage-provisioner" [bbfe2e2e-e519-43f0-8575-91a152db45bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:21.035722  311138 retry.go:31] will retry after 331.46599ms: missing components: kube-dns
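	[editor's note] The retry.go lines above poll for the missing kube-dns component with a short, growing (jittered) delay between attempts (roughly 192ms, 254ms, 331ms in this run). An illustrative Go sketch of that retry pattern; the growth factor and attempt cap are assumptions, not minikube's actual backoff:

    // Retry a readiness check with a growing delay, in the spirit of the
    // retry.go lines above. The schedule here is fixed and purely illustrative.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func waitForKubeDNS(check func() error) error {
    	backoff := 200 * time.Millisecond
    	for attempt := 1; attempt <= 10; attempt++ {
    		if err := check(); err == nil {
    			return nil
    		} else {
    			fmt.Printf("attempt %d: %v; will retry after %s\n", attempt, err, backoff)
    		}
    		time.Sleep(backoff)
    		backoff = backoff * 3 / 2 // grows roughly like the 192ms -> 254ms -> 331ms sequence
    	}
    	return errors.New("missing components: kube-dns")
    }

    func main() {
    	_ = waitForKubeDNS(func() error { return errors.New("coredns not Running yet") })
    }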
	I1123 09:58:20.222065  322309 ssh_runner.go:195] Run: systemctl --version
	I1123 09:58:20.228990  322309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:58:20.234556  322309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:58:20.234627  322309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:58:20.243545  322309 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:58:20.243573  322309 start.go:496] detecting cgroup driver to use...
	I1123 09:58:20.243611  322309 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:58:20.243660  322309 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 09:58:20.264548  322309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 09:58:20.280079  322309 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:58:20.280150  322309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:58:20.296816  322309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:58:20.310745  322309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:58:20.413004  322309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:58:20.527067  322309 docker.go:234] disabling docker service ...
	I1123 09:58:20.527157  322309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:58:20.546148  322309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:58:20.566688  322309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:58:20.669747  322309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:58:20.762925  322309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:58:20.781366  322309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:58:20.809420  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 09:58:20.821115  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 09:58:20.833147  322309 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 09:58:20.833216  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 09:58:20.844744  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:58:20.855660  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 09:58:20.869004  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:58:20.881745  322309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:58:20.893684  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 09:58:20.908080  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 09:58:20.921743  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 09:58:20.934863  322309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:58:20.946434  322309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:58:20.957671  322309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:58:21.068818  322309 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 09:58:21.183026  322309 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 09:58:21.183124  322309 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 09:58:21.188002  322309 start.go:564] Will wait 60s for crictl version
	I1123 09:58:21.188177  322309 ssh_runner.go:195] Run: which crictl
	I1123 09:58:21.192973  322309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:58:21.221014  322309 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 09:58:21.221084  322309 ssh_runner.go:195] Run: containerd --version
	I1123 09:58:21.246163  322309 ssh_runner.go:195] Run: containerd --version
	I1123 09:58:21.272718  322309 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
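	[editor's note] Both restarted profiles reconfigure containerd the same way: a series of sed edits to /etc/containerd/config.toml (sandbox image, SystemdCgroup = true, runc v2 runtime, CNI conf_dir), followed by systemctl daemon-reload and restart containerd. A small Go sketch of just the SystemdCgroup substitution, equivalent to the sed command above; it assumes direct local file access rather than minikube's remote ssh_runner:

    // Rewrite SystemdCgroup = ... to SystemdCgroup = true in containerd's config,
    // the same edit the logged sed command performs.
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// ^( *)SystemdCgroup = .*$  ->  ${1}SystemdCgroup = true  (per line)
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
    	if err := os.WriteFile(path, updated, 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Println("set SystemdCgroup = true; restart containerd to apply")
    }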
	I1123 09:58:21.117867  322139 cli_runner.go:164] Run: docker network inspect embed-certs-412583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:58:21.142651  322139 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 09:58:21.147763  322139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:58:21.162167  322139 kubeadm.go:884] updating cluster {Name:embed-certs-412583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:58:21.162356  322139 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:58:21.162432  322139 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:58:21.193588  322139 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:58:21.193608  322139 containerd.go:534] Images already preloaded, skipping extraction
	I1123 09:58:21.193664  322139 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:58:21.220984  322139 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:58:21.221009  322139 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:58:21.221020  322139 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1123 09:58:21.221142  322139 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-412583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:58:21.221200  322139 ssh_runner.go:195] Run: sudo crictl info
	I1123 09:58:21.253087  322139 cni.go:84] Creating CNI manager for ""
	I1123 09:58:21.253125  322139 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:58:21.253161  322139 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:58:21.253198  322139 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-412583 NodeName:embed-certs-412583 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:58:21.253456  322139 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-412583"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:58:21.253546  322139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:58:21.264720  322139 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:58:21.264808  322139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:58:21.274656  322139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1123 09:58:21.290412  322139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:58:21.306023  322139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
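The 2233-byte kubeadm.yaml.new staged above is the multi-document config rendered a few lines earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of reading it back document by document, assuming the gopkg.in/yaml.v3 module is available; this is an illustration, not minikube's own parsing code:

    package main

    import (
    	"bytes"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	dec := yaml.NewDecoder(bytes.NewReader(data))
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// Expected kinds for this file: InitConfiguration, ClusterConfiguration,
    		// KubeletConfiguration, KubeProxyConfiguration.
    		fmt.Println(doc.Kind, doc.APIVersion)
    	}
    }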
	I1123 09:58:21.320803  322139 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:58:21.325391  322139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:58:21.339594  322139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:58:21.433463  322139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:58:21.462263  322139 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583 for IP: 192.168.103.2
	I1123 09:58:21.462293  322139 certs.go:195] generating shared ca certs ...
	I1123 09:58:21.462313  322139 certs.go:227] acquiring lock for ca certs: {Name:mkf0ec2efb8866dd9406da39e0a5f5dc931fd377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:21.462496  322139 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key
	I1123 09:58:21.462555  322139 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key
	I1123 09:58:21.462571  322139 certs.go:257] generating profile certs ...
	I1123 09:58:21.462693  322139 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/client.key
	I1123 09:58:21.462760  322139 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/apiserver.key.2b18ab85
	I1123 09:58:21.462855  322139 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/proxy-client.key
	I1123 09:58:21.463004  322139 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem (1338 bytes)
	W1123 09:58:21.463065  322139 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109_empty.pem, impossibly tiny 0 bytes
	I1123 09:58:21.463079  322139 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:58:21.463130  322139 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem (1082 bytes)
	I1123 09:58:21.463175  322139 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:58:21.463211  322139 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem (1679 bytes)
	I1123 09:58:21.463273  322139 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:58:21.463971  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:58:21.488159  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 09:58:21.518581  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:58:21.541410  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:58:21.573302  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 09:58:21.605581  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:58:21.635805  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:58:21.661426  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:58:21.683895  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem --> /usr/share/ca-certificates/7109.pem (1338 bytes)
	I1123 09:58:21.714410  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /usr/share/ca-certificates/71092.pem (1708 bytes)
	I1123 09:58:21.743544  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:58:21.767759  322139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:58:21.782417  322139 ssh_runner.go:195] Run: openssl version
	I1123 09:58:21.789653  322139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71092.pem && ln -fs /usr/share/ca-certificates/71092.pem /etc/ssl/certs/71092.pem"
	I1123 09:58:21.800679  322139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71092.pem
	I1123 09:58:21.805240  322139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:26 /usr/share/ca-certificates/71092.pem
	I1123 09:58:21.805305  322139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71092.pem
	I1123 09:58:21.853974  322139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71092.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:58:21.864801  322139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:58:21.874638  322139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:58:21.879072  322139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:58:21.879154  322139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:58:21.918122  322139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:58:21.927321  322139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7109.pem && ln -fs /usr/share/ca-certificates/7109.pem /etc/ssl/certs/7109.pem"
	I1123 09:58:21.937415  322139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7109.pem
	I1123 09:58:21.941575  322139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:26 /usr/share/ca-certificates/7109.pem
	I1123 09:58:21.941637  322139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7109.pem
	I1123 09:58:21.981304  322139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7109.pem /etc/ssl/certs/51391683.0"
	I1123 09:58:21.989854  322139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:58:21.994318  322139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:58:22.046395  322139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:58:22.109666  322139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:58:22.189653  322139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:58:22.263782  322139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:58:22.341044  322139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
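Each openssl x509 -checkend 86400 call above asks whether the named certificate will still be valid 24 hours from now (exit status 1 means it expires sooner). A minimal stand-alone sketch of the same check using Go's standard library; the path is one of the certs from the log, but any PEM-encoded certificate works the same way:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// openssl x509 -checkend 86400: does the cert expire within the next 86400 seconds?
    	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }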
	I1123 09:58:22.411923  322139 kubeadm.go:401] StartCluster: {Name:embed-certs-412583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:58:22.412038  322139 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 09:58:22.412095  322139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:58:22.522668  322139 cri.go:89] found id: "04d113f6abe1bb9e310df54f359895f1d3038255f25e995d19aed64e023780a1"
	I1123 09:58:22.522702  322139 cri.go:89] found id: "307ceccb50accd0c4f0a38e216451925bfe88e3967ed982a5dacde30cf71b0ac"
	I1123 09:58:22.522708  322139 cri.go:89] found id: "7cd8d581cd947ec50b444692aacf791c262929511eac08cf07556546cd21eb79"
	I1123 09:58:22.522712  322139 cri.go:89] found id: "02c6f5b667ffd7c657eae93a8d5e7cb1fd1b809d7d00cf6adcd749d6f9f82f54"
	I1123 09:58:22.522716  322139 cri.go:89] found id: "db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4"
	I1123 09:58:22.522721  322139 cri.go:89] found id: "01f6da8fb3f7dfb36a0d1bf7ac34fa2c7715a85d4db29e51e680371cf976de98"
	I1123 09:58:22.522724  322139 cri.go:89] found id: "de43573b10ccd2db93907531b927156400b38e1ccc072df4694f86271eadb2a7"
	I1123 09:58:22.522728  322139 cri.go:89] found id: "c59b716fcc34de4cd73575b55a3765828129eb26a8da3f4e32971f259a35d5b9"
	I1123 09:58:22.522732  322139 cri.go:89] found id: "ea002215dc5ff9de708bfb501c13731db3b837342413eaa850d2bdaa9db3326b"
	I1123 09:58:22.522741  322139 cri.go:89] found id: "786d0436a85fd77d6e60804d917a286d3d71195fdb79aff7ac861499ed514dbf"
	I1123 09:58:22.522746  322139 cri.go:89] found id: "72aa47eb89fbb59da47429e762a23f4e68077fe27b50deb7d4860da7370e5f9b"
	I1123 09:58:22.522750  322139 cri.go:89] found id: "0275433c40df693012ccd198e9424273105899b21f0e3e75bc2219ef022bdec2"
	I1123 09:58:22.522754  322139 cri.go:89] found id: ""
	I1123 09:58:22.522808  322139 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 09:58:22.575489  322139 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"02c6f5b667ffd7c657eae93a8d5e7cb1fd1b809d7d00cf6adcd749d6f9f82f54","pid":915,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02c6f5b667ffd7c657eae93a8d5e7cb1fd1b809d7d00cf6adcd749d6f9f82f54","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02c6f5b667ffd7c657eae93a8d5e7cb1fd1b809d7d00cf6adcd749d6f9f82f54/rootfs","created":"2025-11-23T09:58:22.307218143Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"87353c65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c","io.kubernetes.cri.sandbox-name":"etcd-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6342b63ea1fe8850287e5288573654a5"},"owner":"root"},{"ociVersion":"1.2.1","id":"04d113f6abe1bb9e310df54f359895f1d3038255f25e995
d19aed64e023780a1","pid":988,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/04d113f6abe1bb9e310df54f359895f1d3038255f25e995d19aed64e023780a1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/04d113f6abe1bb9e310df54f359895f1d3038255f25e995d19aed64e023780a1/rootfs","created":"2025-11-23T09:58:22.471116692Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7","io.kubernetes.cri.sandbox-name":"kube-scheduler-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"48b1774c2a81341d1b596102d3c6374b"},"owner":"root"},{"ociVersion":"1.2.1","id":"307ceccb50accd0c4f0a38e216451925bfe88e3967ed982a5dacde30cf71b0ac","pid":965,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/
k8s.io/307ceccb50accd0c4f0a38e216451925bfe88e3967ed982a5dacde30cf71b0ac","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/307ceccb50accd0c4f0a38e216451925bfe88e3967ed982a5dacde30cf71b0ac/rootfs","created":"2025-11-23T09:58:22.428732839Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57","io.kubernetes.cri.sandbox-name":"kube-apiserver-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4910cba9d7ad0b0fc7314f9642a97b8c"},"owner":"root"},{"ociVersion":"1.2.1","id":"4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038","pid":859,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038","rootfs":"/run/containerd/io.co
ntainerd.runtime.v2.task/k8s.io/4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038/rootfs","created":"2025-11-23T09:58:22.146071319Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-412583_a16dd0b1b9cc0f64fa36d85cacd3aa9f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a16dd0b1b9cc0f64fa36d85cacd3aa9f"},"owner":"root"},{"ociVersion":"1.2.1","id":"48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57","pid":850,"status
":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57/rootfs","created":"2025-11-23T09:58:22.130629369Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-412583_4910cba9d7ad0b0fc7314f9642a97b8c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4910cba9
d7ad0b0fc7314f9642a97b8c"},"owner":"root"},{"ociVersion":"1.2.1","id":"7cd8d581cd947ec50b444692aacf791c262929511eac08cf07556546cd21eb79","pid":936,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cd8d581cd947ec50b444692aacf791c262929511eac08cf07556546cd21eb79","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cd8d581cd947ec50b444692aacf791c262929511eac08cf07556546cd21eb79/rootfs","created":"2025-11-23T09:58:22.383028386Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038","io.kubernetes.cri.sandbox-name":"kube-controller-manager-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a16dd0b1b9cc0f64fa36d85cacd3aa9f"},"owner":"root"},{"ociVersion":"1.2.1","id":"87353c
65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c","pid":767,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/87353c65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/87353c65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c/rootfs","created":"2025-11-23T09:58:22.094263435Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"87353c65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-412583_6342b63ea1fe8850287e5288573654a5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-embed-certs-412583","io.kubernetes.cri.sandbox-namespace
":"kube-system","io.kubernetes.cri.sandbox-uid":"6342b63ea1fe8850287e5288573654a5"},"owner":"root"},{"ociVersion":"1.2.1","id":"d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7","pid":867,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7/rootfs","created":"2025-11-23T09:58:22.152374682Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-412583_48b1774c2a81341d1b596102
d3c6374b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"48b1774c2a81341d1b596102d3c6374b"},"owner":"root"}]
	I1123 09:58:22.575707  322139 cri.go:126] list returned 8 containers
	I1123 09:58:22.575725  322139 cri.go:129] container: {ID:02c6f5b667ffd7c657eae93a8d5e7cb1fd1b809d7d00cf6adcd749d6f9f82f54 Status:running}
	I1123 09:58:22.575759  322139 cri.go:135] skipping {02c6f5b667ffd7c657eae93a8d5e7cb1fd1b809d7d00cf6adcd749d6f9f82f54 running}: state = "running", want "paused"
	I1123 09:58:22.575771  322139 cri.go:129] container: {ID:04d113f6abe1bb9e310df54f359895f1d3038255f25e995d19aed64e023780a1 Status:created}
	I1123 09:58:22.575779  322139 cri.go:135] skipping {04d113f6abe1bb9e310df54f359895f1d3038255f25e995d19aed64e023780a1 created}: state = "created", want "paused"
	I1123 09:58:22.575791  322139 cri.go:129] container: {ID:307ceccb50accd0c4f0a38e216451925bfe88e3967ed982a5dacde30cf71b0ac Status:running}
	I1123 09:58:22.575797  322139 cri.go:135] skipping {307ceccb50accd0c4f0a38e216451925bfe88e3967ed982a5dacde30cf71b0ac running}: state = "running", want "paused"
	I1123 09:58:22.575803  322139 cri.go:129] container: {ID:4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038 Status:running}
	I1123 09:58:22.575819  322139 cri.go:131] skipping 4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038 - not in ps
	I1123 09:58:22.575824  322139 cri.go:129] container: {ID:48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57 Status:running}
	I1123 09:58:22.575842  322139 cri.go:131] skipping 48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57 - not in ps
	I1123 09:58:22.575847  322139 cri.go:129] container: {ID:7cd8d581cd947ec50b444692aacf791c262929511eac08cf07556546cd21eb79 Status:running}
	I1123 09:58:22.575860  322139 cri.go:135] skipping {7cd8d581cd947ec50b444692aacf791c262929511eac08cf07556546cd21eb79 running}: state = "running", want "paused"
	I1123 09:58:22.575876  322139 cri.go:129] container: {ID:87353c65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c Status:running}
	I1123 09:58:22.575889  322139 cri.go:131] skipping 87353c65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c - not in ps
	I1123 09:58:22.575895  322139 cri.go:129] container: {ID:d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7 Status:running}
	I1123 09:58:22.575902  322139 cri.go:131] skipping d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7 - not in ps
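The skipping decisions above amount to a set intersection plus a state filter: a container from the runc listing is kept only if crictl also reported its ID (pause sandboxes that crictl omits are "not in ps") and its runc status matches the wanted state, which is "paused" here. A minimal, hypothetical sketch of that filter (placeholder IDs, not minikube's own code):

    package main

    import "fmt"

    type runcEntry struct {
    	ID     string
    	Status string
    }

    // filterContainers keeps only IDs that crictl reported and whose runc state
    // matches the wanted state (e.g. "paused").
    func filterContainers(crictlIDs []string, listed []runcEntry, want string) []string {
    	inPS := make(map[string]bool, len(crictlIDs))
    	for _, id := range crictlIDs {
    		inPS[id] = true
    	}
    	var keep []string
    	for _, c := range listed {
    		if !inPS[c.ID] {
    			continue // sandbox containers: "not in ps"
    		}
    		if c.Status != want {
    			continue // e.g. state = "running", want "paused"
    		}
    		keep = append(keep, c.ID)
    	}
    	return keep
    }

    func main() {
    	ids := []string{"etcd-container-id", "scheduler-container-id"}
    	entries := []runcEntry{
    		{ID: "etcd-container-id", Status: "running"},
    		{ID: "scheduler-container-id", Status: "created"},
    		{ID: "pause-sandbox-id", Status: "running"},
    	}
    	fmt.Println(filterContainers(ids, entries, "paused")) // [] — nothing is paused yet
    }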
	I1123 09:58:22.575953  322139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:58:22.611879  322139 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:58:22.611902  322139 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:58:22.611954  322139 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:58:22.642294  322139 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:58:22.643276  322139 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-412583" does not appear in /home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:58:22.644366  322139 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-3552/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-412583" cluster setting kubeconfig missing "embed-certs-412583" context setting]
	I1123 09:58:22.645824  322139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/kubeconfig: {Name:mka3871857a2712d9b8d0b57e593926fb298dec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:22.649056  322139 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:58:22.674727  322139 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1123 09:58:22.674893  322139 kubeadm.go:602] duration metric: took 62.98229ms to restartPrimaryControlPlane
	I1123 09:58:22.674941  322139 kubeadm.go:403] duration metric: took 263.030265ms to StartCluster
	I1123 09:58:22.675020  322139 settings.go:142] acquiring lock: {Name:mkf22dae3e46f0832bb83531ab4e1d4bfda0dd75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:22.675204  322139 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:58:22.677946  322139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/kubeconfig: {Name:mka3871857a2712d9b8d0b57e593926fb298dec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:22.678263  322139 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:58:22.678628  322139 config.go:182] Loaded profile config "embed-certs-412583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:58:22.678642  322139 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
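The toEnable map above drives which addons get reconciled on this profile; entries set to false are left alone. A minimal, hypothetical sketch (not minikube's own code) of collecting just the enabled names from such a map:

    package main

    import (
    	"fmt"
    	"sort"
    )

    // enabledAddons returns the sorted names of addons marked true.
    func enabledAddons(toEnable map[string]bool) []string {
    	var names []string
    	for name, on := range toEnable {
    		if on {
    			names = append(names, name)
    		}
    	}
    	sort.Strings(names)
    	return names
    }

    func main() {
    	toEnable := map[string]bool{
    		"dashboard": true, "default-storageclass": true, "metrics-server": true,
    		"storage-provisioner": true, "ingress": false, "registry": false,
    	}
    	fmt.Println(enabledAddons(toEnable))
    	// [dashboard default-storageclass metrics-server storage-provisioner]
    }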
	I1123 09:58:22.679078  322139 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-412583"
	I1123 09:58:22.679096  322139 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-412583"
	I1123 09:58:22.679094  322139 addons.go:70] Setting default-storageclass=true in profile "embed-certs-412583"
	I1123 09:58:22.679109  322139 addons.go:70] Setting metrics-server=true in profile "embed-certs-412583"
	I1123 09:58:22.679121  322139 addons.go:239] Setting addon metrics-server=true in "embed-certs-412583"
	W1123 09:58:22.679126  322139 addons.go:248] addon metrics-server should already be in state true
	I1123 09:58:22.679129  322139 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-412583"
	I1123 09:58:22.679161  322139 host.go:66] Checking if "embed-certs-412583" exists ...
	I1123 09:58:22.679575  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:22.679663  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:22.679827  322139 addons.go:70] Setting dashboard=true in profile "embed-certs-412583"
	I1123 09:58:22.679848  322139 addons.go:239] Setting addon dashboard=true in "embed-certs-412583"
	W1123 09:58:22.679856  322139 addons.go:248] addon dashboard should already be in state true
	I1123 09:58:22.679887  322139 host.go:66] Checking if "embed-certs-412583" exists ...
	W1123 09:58:22.679104  322139 addons.go:248] addon storage-provisioner should already be in state true
	I1123 09:58:22.679985  322139 host.go:66] Checking if "embed-certs-412583" exists ...
	I1123 09:58:22.680425  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:22.680471  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:22.685805  322139 out.go:179] * Verifying Kubernetes components...
	I1123 09:58:22.689947  322139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:58:22.725041  322139 addons.go:239] Setting addon default-storageclass=true in "embed-certs-412583"
	W1123 09:58:22.725078  322139 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:58:22.725107  322139 host.go:66] Checking if "embed-certs-412583" exists ...
	I1123 09:58:22.725611  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:22.740887  322139 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 09:58:22.740940  322139 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:58:22.740998  322139 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 09:58:22.742416  322139 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:58:22.742442  322139 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:58:22.742508  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:22.744008  322139 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 09:58:22.744037  322139 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 09:58:22.744119  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:22.744851  322139 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
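The two cli_runner calls just above look up the host port that Docker mapped to the container's SSH port (22/tcp), so the addon manifests staged with "scp memory" can be copied in over SSH. A minimal sketch of the same lookup run outside minikube, assuming the docker CLI is on PATH and using the exact --format expression from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// First host binding of 22/tcp for the named container.
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, "embed-certs-412583").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
    }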
	I1123 09:58:21.274495  322309 cli_runner.go:164] Run: docker network inspect no-preload-309734 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:58:21.294049  322309 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 09:58:21.298367  322309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:58:21.310746  322309 kubeadm.go:884] updating cluster {Name:no-preload-309734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-309734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:58:21.310857  322309 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:58:21.310899  322309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:58:21.341106  322309 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:58:21.341133  322309 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:58:21.341149  322309 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1123 09:58:21.341280  322309 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-309734 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-309734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:58:21.341360  322309 ssh_runner.go:195] Run: sudo crictl info
	I1123 09:58:21.375033  322309 cni.go:84] Creating CNI manager for ""
	I1123 09:58:21.375065  322309 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:58:21.375080  322309 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:58:21.375106  322309 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-309734 NodeName:no-preload-309734 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:58:21.375251  322309 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-309734"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:58:21.375322  322309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:58:21.387913  322309 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:58:21.387991  322309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:58:21.397681  322309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 09:58:21.413039  322309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:58:21.427808  322309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1123 09:58:21.443045  322309 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:58:21.447908  322309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:58:21.460621  322309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:58:21.571138  322309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:58:21.600090  322309 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734 for IP: 192.168.94.2
	I1123 09:58:21.600121  322309 certs.go:195] generating shared ca certs ...
	I1123 09:58:21.600144  322309 certs.go:227] acquiring lock for ca certs: {Name:mkf0ec2efb8866dd9406da39e0a5f5dc931fd377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:21.600287  322309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key
	I1123 09:58:21.600394  322309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key
	I1123 09:58:21.600411  322309 certs.go:257] generating profile certs ...
	I1123 09:58:21.600533  322309 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/client.key
	I1123 09:58:21.600609  322309 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/apiserver.key.e5f9e7ec
	I1123 09:58:21.600680  322309 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/proxy-client.key
	I1123 09:58:21.600837  322309 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem (1338 bytes)
	W1123 09:58:21.600886  322309 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109_empty.pem, impossibly tiny 0 bytes
	I1123 09:58:21.600905  322309 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:58:21.600944  322309 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem (1082 bytes)
	I1123 09:58:21.600985  322309 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:58:21.601024  322309 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem (1679 bytes)
	I1123 09:58:21.601090  322309 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:58:21.602145  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:58:21.634430  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 09:58:21.659812  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:58:21.682412  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:58:21.714212  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 09:58:21.741669  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:58:21.765516  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:58:21.786309  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:58:21.809102  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /usr/share/ca-certificates/71092.pem (1708 bytes)
	I1123 09:58:21.833628  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:58:21.856900  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem --> /usr/share/ca-certificates/7109.pem (1338 bytes)
	I1123 09:58:21.878576  322309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:58:21.893401  322309 ssh_runner.go:195] Run: openssl version
	I1123 09:58:21.900728  322309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71092.pem && ln -fs /usr/share/ca-certificates/71092.pem /etc/ssl/certs/71092.pem"
	I1123 09:58:21.910813  322309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71092.pem
	I1123 09:58:21.915460  322309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:26 /usr/share/ca-certificates/71092.pem
	I1123 09:58:21.915523  322309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71092.pem
	I1123 09:58:21.954651  322309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71092.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:58:21.964837  322309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:58:21.975213  322309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:58:21.979550  322309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:58:21.979624  322309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:58:22.031316  322309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:58:22.042472  322309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7109.pem && ln -fs /usr/share/ca-certificates/7109.pem /etc/ssl/certs/7109.pem"
	I1123 09:58:22.053567  322309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7109.pem
	I1123 09:58:22.060156  322309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:26 /usr/share/ca-certificates/7109.pem
	I1123 09:58:22.060228  322309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7109.pem
	I1123 09:58:22.123945  322309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7109.pem /etc/ssl/certs/51391683.0"
	I1123 09:58:22.141229  322309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:58:22.149212  322309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:58:22.215773  322309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:58:22.307006  322309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:58:22.409864  322309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:58:22.488008  322309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:58:22.547183  322309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 09:58:22.632317  322309 kubeadm.go:401] StartCluster: {Name:no-preload-309734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-309734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:58:22.632463  322309 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 09:58:22.632537  322309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:58:22.820225  322309 cri.go:89] found id: "7e8fac570a0a67f195a769b2ec23f3559a12a613d3c0b7bd53111013ccc132e0"
	I1123 09:58:22.820247  322309 cri.go:89] found id: "b663a2618d3c7a61b94fdf390c3d26e81c8e3081c251ea500e08d58195f9c484"
	I1123 09:58:22.820254  322309 cri.go:89] found id: "528da9e711eda81fc2db244d270b7ad73d0db39317a08ee44e62a98b7a422e75"
	I1123 09:58:22.820259  322309 cri.go:89] found id: "aff8a96e9f47795ac47742b5100c91b5d677be8da1e8b29a8e93651c946e7426"
	I1123 09:58:22.820274  322309 cri.go:89] found id: "6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7"
	I1123 09:58:22.820279  322309 cri.go:89] found id: "103095b7989eeb9782636e7c2857b6f8b7ec6b0d8f19a4d16401f43390b5b6c8"
	I1123 09:58:22.820283  322309 cri.go:89] found id: "5c49f9103fd4c18deec14e3758e958db34380a181d3ea11344ed107acc94faab"
	I1123 09:58:22.820287  322309 cri.go:89] found id: "b1f2f40f833522a80b40c076eb2228ca8ab64af5ae29ec412679554033ccf342"
	I1123 09:58:22.820291  322309 cri.go:89] found id: "d13615209a18dd7b287968a7f98989bb3ce87db942b906988e39fde11c294cce"
	I1123 09:58:22.820302  322309 cri.go:89] found id: "b7a0f8d20ac463989e63a3565c249816e2e20c9067287e9f2b8c3db6cfb05aab"
	I1123 09:58:22.820306  322309 cri.go:89] found id: "d3705422907a474de42f4b2ba1fea7490c10e3083855a79fad006ba545fab905"
	I1123 09:58:22.820311  322309 cri.go:89] found id: "a81288f6ae55b6a042b8f67e3e9eedfe1c61dd371e39e06133e14aee6f968eb3"
	I1123 09:58:22.820315  322309 cri.go:89] found id: ""
	I1123 09:58:22.820377  322309 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 09:58:22.897989  322309 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5","pid":859,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5/rootfs","created":"2025-11-23T09:58:22.314641016Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-309734_d1a9f5b1e4228d8308c268e4cff72a2a","io.kubernetes.cri.sand
box-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-309734","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d1a9f5b1e4228d8308c268e4cff72a2a"},"owner":"root"},{"ociVersion":"1.2.1","id":"528da9e711eda81fc2db244d270b7ad73d0db39317a08ee44e62a98b7a422e75","pid":956,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/528da9e711eda81fc2db244d270b7ad73d0db39317a08ee44e62a98b7a422e75","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/528da9e711eda81fc2db244d270b7ad73d0db39317a08ee44e62a98b7a422e75/rootfs","created":"2025-11-23T09:58:22.573043592Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"da35d734fa90bf64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-309734","io.kubernet
es.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"621c440e8d9733cd5781b23a5d2d5f0f"},"owner":"root"},{"ociVersion":"1.2.1","id":"6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda","pid":857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda/rootfs","created":"2025-11-23T09:58:22.304911081Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-309734_2
f0e3c5c71b122518e8f9d36a37eecf6","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-309734","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2f0e3c5c71b122518e8f9d36a37eecf6"},"owner":"root"},{"ociVersion":"1.2.1","id":"7e8fac570a0a67f195a769b2ec23f3559a12a613d3c0b7bd53111013ccc132e0","pid":973,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e8fac570a0a67f195a769b2ec23f3559a12a613d3c0b7bd53111013ccc132e0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e8fac570a0a67f195a769b2ec23f3559a12a613d3c0b7bd53111013ccc132e0/rootfs","created":"2025-11-23T09:58:22.577398869Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda","io.kubernetes.cri.sandbox-name":
"kube-scheduler-no-preload-309734","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2f0e3c5c71b122518e8f9d36a37eecf6"},"owner":"root"},{"ociVersion":"1.2.1","id":"aff8a96e9f47795ac47742b5100c91b5d677be8da1e8b29a8e93651c946e7426","pid":916,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aff8a96e9f47795ac47742b5100c91b5d677be8da1e8b29a8e93651c946e7426","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aff8a96e9f47795ac47742b5100c91b5d677be8da1e8b29a8e93651c946e7426/rootfs","created":"2025-11-23T09:58:22.496529275Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77","io.kubernetes.cri.sandbox-name":"etcd-no-preload-309734","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0a0dd6d88a52ba9
00ac99a4488161e2b"},"owner":"root"},{"ociVersion":"1.2.1","id":"b663a2618d3c7a61b94fdf390c3d26e81c8e3081c251ea500e08d58195f9c484","pid":958,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b663a2618d3c7a61b94fdf390c3d26e81c8e3081c251ea500e08d58195f9c484","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b663a2618d3c7a61b94fdf390c3d26e81c8e3081c251ea500e08d58195f9c484/rootfs","created":"2025-11-23T09:58:22.564906359Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-309734","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d1a9f5b1e4228d8308c268e4cff72a2a"},"owner":"root"},{"ociVersion":"1.2.1","id":"da35d734fa90bf
64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c","pid":845,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da35d734fa90bf64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da35d734fa90bf64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c/rootfs","created":"2025-11-23T09:58:22.300157615Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"da35d734fa90bf64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-309734_621c440e8d9733cd5781b23a5d2d5f0f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-309734","io.kubernetes.cri.sandbox
-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"621c440e8d9733cd5781b23a5d2d5f0f"},"owner":"root"},{"ociVersion":"1.2.1","id":"e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77","pid":798,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77/rootfs","created":"2025-11-23T09:58:22.263111405Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-309734_0a0dd6d88a52ba900ac99a448
8161e2b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-309734","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0a0dd6d88a52ba900ac99a4488161e2b"},"owner":"root"}]
	I1123 09:58:22.898184  322309 cri.go:126] list returned 8 containers
	I1123 09:58:22.898197  322309 cri.go:129] container: {ID:0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5 Status:running}
	I1123 09:58:22.898227  322309 cri.go:131] skipping 0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5 - not in ps
	I1123 09:58:22.898234  322309 cri.go:129] container: {ID:528da9e711eda81fc2db244d270b7ad73d0db39317a08ee44e62a98b7a422e75 Status:running}
	I1123 09:58:22.898244  322309 cri.go:135] skipping {528da9e711eda81fc2db244d270b7ad73d0db39317a08ee44e62a98b7a422e75 running}: state = "running", want "paused"
	I1123 09:58:22.898255  322309 cri.go:129] container: {ID:6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda Status:running}
	I1123 09:58:22.898268  322309 cri.go:131] skipping 6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda - not in ps
	I1123 09:58:22.898273  322309 cri.go:129] container: {ID:7e8fac570a0a67f195a769b2ec23f3559a12a613d3c0b7bd53111013ccc132e0 Status:running}
	I1123 09:58:22.898280  322309 cri.go:135] skipping {7e8fac570a0a67f195a769b2ec23f3559a12a613d3c0b7bd53111013ccc132e0 running}: state = "running", want "paused"
	I1123 09:58:22.898286  322309 cri.go:129] container: {ID:aff8a96e9f47795ac47742b5100c91b5d677be8da1e8b29a8e93651c946e7426 Status:running}
	I1123 09:58:22.898292  322309 cri.go:135] skipping {aff8a96e9f47795ac47742b5100c91b5d677be8da1e8b29a8e93651c946e7426 running}: state = "running", want "paused"
	I1123 09:58:22.898299  322309 cri.go:129] container: {ID:b663a2618d3c7a61b94fdf390c3d26e81c8e3081c251ea500e08d58195f9c484 Status:running}
	I1123 09:58:22.898306  322309 cri.go:135] skipping {b663a2618d3c7a61b94fdf390c3d26e81c8e3081c251ea500e08d58195f9c484 running}: state = "running", want "paused"
	I1123 09:58:22.898312  322309 cri.go:129] container: {ID:da35d734fa90bf64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c Status:running}
	I1123 09:58:22.898320  322309 cri.go:131] skipping da35d734fa90bf64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c - not in ps
	I1123 09:58:22.898325  322309 cri.go:129] container: {ID:e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77 Status:running}
	I1123 09:58:22.898341  322309 cri.go:131] skipping e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77 - not in ps
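The cri.go lines above decode the "runc list -f json" output and keep a container only if its ID also appeared in the crictl ps result and its state matches the requested "paused" state. A minimal sketch of that filter, assuming only the id and status fields shown in the JSON (struct and function names are illustrative):

// runcfilter.go: decode `runc list -f json` output and keep only paused containers
// that also appeared in the crictl ID list, mirroring the "skipping ..." messages above.
package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer holds just the fields used here; the real output has more.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func pausedContainers(runcJSON []byte, crictlIDs map[string]bool) ([]string, error) {
	var all []runcContainer
	if err := json.Unmarshal(runcJSON, &all); err != nil {
		return nil, err
	}
	var keep []string
	for _, c := range all {
		if !crictlIDs[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != "paused" {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, "paused")
			continue
		}
		keep = append(keep, c.ID)
	}
	return keep, nil
}

func main() {
	sample := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
	ids := map[string]bool{"abc": true, "def": true}
	keep, err := pausedContainers(sample, ids)
	if err != nil {
		panic(err)
	}
	fmt.Println("paused:", keep)
}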
	I1123 09:58:22.898392  322309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:58:22.910936  322309 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:58:22.910956  322309 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:58:22.911008  322309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:58:22.926982  322309 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:58:22.928309  322309 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-309734" does not appear in /home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:58:22.929354  322309 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-3552/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-309734" cluster setting kubeconfig missing "no-preload-309734" context setting]
	I1123 09:58:22.931598  322309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/kubeconfig: {Name:mka3871857a2712d9b8d0b57e593926fb298dec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:22.933983  322309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:58:22.957568  322309 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1123 09:58:22.957607  322309 kubeadm.go:602] duration metric: took 46.644326ms to restartPrimaryControlPlane
	I1123 09:58:22.957618  322309 kubeadm.go:403] duration metric: took 325.308863ms to StartCluster
	I1123 09:58:22.957641  322309 settings.go:142] acquiring lock: {Name:mkf22dae3e46f0832bb83531ab4e1d4bfda0dd75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:22.957705  322309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:58:22.960240  322309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/kubeconfig: {Name:mka3871857a2712d9b8d0b57e593926fb298dec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:22.960737  322309 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:58:22.961000  322309 config.go:182] Loaded profile config "no-preload-309734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:58:22.960842  322309 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:58:22.961080  322309 addons.go:70] Setting dashboard=true in profile "no-preload-309734"
	I1123 09:58:22.961088  322309 addons.go:70] Setting storage-provisioner=true in profile "no-preload-309734"
	I1123 09:58:22.961102  322309 addons.go:239] Setting addon dashboard=true in "no-preload-309734"
	I1123 09:58:22.961106  322309 addons.go:239] Setting addon storage-provisioner=true in "no-preload-309734"
	W1123 09:58:22.961111  322309 addons.go:248] addon dashboard should already be in state true
	W1123 09:58:22.961115  322309 addons.go:248] addon storage-provisioner should already be in state true
	I1123 09:58:22.961148  322309 host.go:66] Checking if "no-preload-309734" exists ...
	I1123 09:58:22.961153  322309 addons.go:70] Setting default-storageclass=true in profile "no-preload-309734"
	I1123 09:58:22.961188  322309 addons.go:70] Setting metrics-server=true in profile "no-preload-309734"
	I1123 09:58:22.961204  322309 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-309734"
	I1123 09:58:22.961212  322309 addons.go:239] Setting addon metrics-server=true in "no-preload-309734"
	W1123 09:58:22.961220  322309 addons.go:248] addon metrics-server should already be in state true
	I1123 09:58:22.961242  322309 host.go:66] Checking if "no-preload-309734" exists ...
	I1123 09:58:22.961148  322309 host.go:66] Checking if "no-preload-309734" exists ...
	I1123 09:58:22.961551  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:22.961668  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:22.961922  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:22.962571  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:22.963365  322309 out.go:179] * Verifying Kubernetes components...
	I1123 09:58:22.967624  322309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:58:23.000169  322309 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:58:23.002026  322309 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:58:23.002116  322309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:58:23.002210  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:23.024527  322309 addons.go:239] Setting addon default-storageclass=true in "no-preload-309734"
	W1123 09:58:23.024576  322309 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:58:23.024797  322309 host.go:66] Checking if "no-preload-309734" exists ...
	I1123 09:58:23.026816  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:23.033752  322309 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 09:58:23.036761  322309 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 09:58:23.038842  322309 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 09:58:23.039035  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:58:23.039539  322309 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:58:23.039699  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:21.373166  311138 system_pods.go:86] 8 kube-system pods found
	I1123 09:58:21.373204  311138 system_pods.go:89] "coredns-66bc5c9577-49wlg" [967d1f43-a5b7-4bf8-8111-c014f4b7594f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:21.373212  311138 system_pods.go:89] "etcd-default-k8s-diff-port-696492" [99ce30c3-ea20-422d-a7d8-4b8f58a70c07] Running
	I1123 09:58:21.373220  311138 system_pods.go:89] "kindnet-kx2hw" [1c3d2821-8e77-421a-8ccc-8d3d76d1380d] Running
	I1123 09:58:21.373225  311138 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-696492" [98117bb1-3ea0-4402-8845-6ee90c435d23] Running
	I1123 09:58:21.373231  311138 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-696492" [abb3ab85-565b-4911-8dbc-09ea147eb30b] Running
	I1123 09:58:21.373235  311138 system_pods.go:89] "kube-proxy-q6wsc" [ad2f26f5-ff1d-4acf-bea5-8ad34dc37130] Running
	I1123 09:58:21.373241  311138 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-696492" [b21530e3-7cc1-445f-82cd-1d11d79f9e20] Running
	I1123 09:58:21.373248  311138 system_pods.go:89] "storage-provisioner" [bbfe2e2e-e519-43f0-8575-91a152db45bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:21.373267  311138 retry.go:31] will retry after 416.506633ms: missing components: kube-dns
	I1123 09:58:21.794744  311138 system_pods.go:86] 8 kube-system pods found
	I1123 09:58:21.794770  311138 system_pods.go:89] "coredns-66bc5c9577-49wlg" [967d1f43-a5b7-4bf8-8111-c014f4b7594f] Running
	I1123 09:58:21.794776  311138 system_pods.go:89] "etcd-default-k8s-diff-port-696492" [99ce30c3-ea20-422d-a7d8-4b8f58a70c07] Running
	I1123 09:58:21.794781  311138 system_pods.go:89] "kindnet-kx2hw" [1c3d2821-8e77-421a-8ccc-8d3d76d1380d] Running
	I1123 09:58:21.794787  311138 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-696492" [98117bb1-3ea0-4402-8845-6ee90c435d23] Running
	I1123 09:58:21.794793  311138 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-696492" [abb3ab85-565b-4911-8dbc-09ea147eb30b] Running
	I1123 09:58:21.794796  311138 system_pods.go:89] "kube-proxy-q6wsc" [ad2f26f5-ff1d-4acf-bea5-8ad34dc37130] Running
	I1123 09:58:21.794800  311138 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-696492" [b21530e3-7cc1-445f-82cd-1d11d79f9e20] Running
	I1123 09:58:21.794803  311138 system_pods.go:89] "storage-provisioner" [bbfe2e2e-e519-43f0-8575-91a152db45bf] Running
	I1123 09:58:21.794810  311138 system_pods.go:126] duration metric: took 1.224058938s to wait for k8s-apps to be running ...
	I1123 09:58:21.794819  311138 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:58:21.794860  311138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:58:21.810163  311138 system_svc.go:56] duration metric: took 15.335302ms WaitForService to wait for kubelet
	I1123 09:58:21.810195  311138 kubeadm.go:587] duration metric: took 13.090944371s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:58:21.810216  311138 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:58:21.813663  311138 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:58:21.813696  311138 node_conditions.go:123] node cpu capacity is 8
	I1123 09:58:21.813730  311138 node_conditions.go:105] duration metric: took 3.507443ms to run NodePressure ...
	I1123 09:58:21.813758  311138 start.go:242] waiting for startup goroutines ...
	I1123 09:58:21.813771  311138 start.go:247] waiting for cluster config update ...
	I1123 09:58:21.813790  311138 start.go:256] writing updated cluster config ...
	I1123 09:58:21.814128  311138 ssh_runner.go:195] Run: rm -f paused
	I1123 09:58:21.818537  311138 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:58:21.822899  311138 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-49wlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:21.828869  311138 pod_ready.go:94] pod "coredns-66bc5c9577-49wlg" is "Ready"
	I1123 09:58:21.828907  311138 pod_ready.go:86] duration metric: took 5.975283ms for pod "coredns-66bc5c9577-49wlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:21.831672  311138 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:21.836558  311138 pod_ready.go:94] pod "etcd-default-k8s-diff-port-696492" is "Ready"
	I1123 09:58:21.836589  311138 pod_ready.go:86] duration metric: took 4.88699ms for pod "etcd-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:21.839055  311138 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:21.843948  311138 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-696492" is "Ready"
	I1123 09:58:21.843979  311138 pod_ready.go:86] duration metric: took 4.896647ms for pod "kube-apiserver-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:21.846732  311138 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:22.223828  311138 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-696492" is "Ready"
	I1123 09:58:22.223861  311138 pod_ready.go:86] duration metric: took 377.100636ms for pod "kube-controller-manager-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:22.425032  311138 pod_ready.go:83] waiting for pod "kube-proxy-q6wsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:22.826560  311138 pod_ready.go:94] pod "kube-proxy-q6wsc" is "Ready"
	I1123 09:58:22.826589  311138 pod_ready.go:86] duration metric: took 401.523413ms for pod "kube-proxy-q6wsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:23.029997  311138 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:23.424854  311138 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-696492" is "Ready"
	I1123 09:58:23.424899  311138 pod_ready.go:86] duration metric: took 394.877866ms for pod "kube-scheduler-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:23.424916  311138 pod_ready.go:40] duration metric: took 1.606342126s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:58:23.509609  311138 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:58:23.513916  311138 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-696492" cluster and "default" namespace by default
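The system_pods.go and pod_ready.go waits above reduce to inspecting each kube-system pod's Ready condition through the freshly written kubeconfig. A minimal client-go sketch of the same check (the kubeconfig path is left at the client-go default, so this is illustrative rather than minikube's own code):

// podsready.go: list kube-system pods and report which ones have the Ready condition set,
// roughly what the pod_ready waits in the log are polling for.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Uses the default ~/.kube/config; the test run writes its own kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-60s ready=%v\n", p.Name, isReady(p))
	}
}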
	I1123 09:58:22.746366  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:58:22.746394  322139 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:58:22.746461  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:22.767664  322139 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:58:22.767689  322139 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:58:22.767749  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:22.788700  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:22.792428  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:22.792696  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:22.826177  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:23.053750  322139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:58:23.107654  322139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:58:23.123973  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:58:23.124005  322139 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:58:23.126775  322139 node_ready.go:35] waiting up to 6m0s for node "embed-certs-412583" to be "Ready" ...
	I1123 09:58:23.187851  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:58:23.187973  322139 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:58:23.217922  322139 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 09:58:23.218007  322139 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 09:58:23.305984  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:58:23.306071  322139 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:58:23.307748  322139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:58:23.324529  322139 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 09:58:23.324565  322139 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 09:58:23.340572  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:58:23.340605  322139 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:58:23.415883  322139 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:58:23.415916  322139 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 09:58:23.425591  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:58:23.425615  322139 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:58:23.474772  322139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:58:23.511681  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:58:23.511713  322139 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:58:23.599552  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:58:23.599584  322139 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:58:23.663716  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:58:23.663870  322139 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:58:23.704503  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:58:23.704609  322139 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:58:23.787106  322139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:58:23.042109  322309 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 09:58:23.042193  322309 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 09:58:23.042297  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:23.073469  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:23.088819  322309 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:58:23.088845  322309 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:58:23.089358  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:23.101695  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:23.104450  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:23.134905  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:23.362213  322309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:58:23.382428  322309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:58:23.405607  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:58:23.405632  322309 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:58:23.418817  322309 node_ready.go:35] waiting up to 6m0s for node "no-preload-309734" to be "Ready" ...
	I1123 09:58:23.455580  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:58:23.455700  322309 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:58:23.477618  322309 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 09:58:23.477639  322309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 09:58:23.518678  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:58:23.518707  322309 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:58:23.547427  322309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:58:23.587428  322309 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 09:58:23.587479  322309 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 09:58:23.594823  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:58:23.594851  322309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:58:23.645814  322309 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:58:23.645840  322309 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 09:58:23.649129  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:58:23.649206  322309 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:58:23.713186  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:58:23.713284  322309 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:58:23.743261  322309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:58:23.783546  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:58:23.783826  322309 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:58:23.933891  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:58:23.933976  322309 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:58:23.963869  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:58:23.963895  322309 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:58:24.009293  322309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
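The addon flow above stages each manifest under /etc/kubernetes/addons over SSH and then applies the whole set with the node-local kubectl and kubeconfig. A small sketch of that apply step as it would run on the node (paths mirror the log; the helper name is illustrative):

// addonapply.go: apply a set of staged addon manifests with the node's kubeconfig,
// as the `kubectl apply -f ...` runs above do.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyAddons(manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	// The control plane's admin kubeconfig lives on the node, not on the host.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyAddons([]string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}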
	W1123 09:58:20.970982  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	W1123 09:58:22.978532  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	W1123 09:58:25.470653  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	I1123 09:58:25.403342  322309 node_ready.go:49] node "no-preload-309734" is "Ready"
	I1123 09:58:25.403380  322309 node_ready.go:38] duration metric: took 1.984510855s for node "no-preload-309734" to be "Ready" ...
	I1123 09:58:25.403397  322309 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:58:25.403459  322309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:58:26.666727  322309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.284256705s)
	I1123 09:58:26.666807  322309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.119349474s)
	I1123 09:58:26.957105  322309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.213785139s)
	I1123 09:58:26.957147  322309 addons.go:495] Verifying addon metrics-server=true in "no-preload-309734"
	I1123 09:58:26.999989  322309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.990639389s)
	I1123 09:58:27.000614  322309 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.597077609s)
	I1123 09:58:27.000657  322309 api_server.go:72] duration metric: took 4.039883277s to wait for apiserver process to appear ...
	I1123 09:58:27.000673  322309 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:58:27.000695  322309 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:58:27.004257  322309 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-309734 addons enable metrics-server
	
	I1123 09:58:27.008720  322309 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1123 09:58:25.303722  322139 node_ready.go:49] node "embed-certs-412583" is "Ready"
	I1123 09:58:25.303762  322139 node_ready.go:38] duration metric: took 2.176945691s for node "embed-certs-412583" to be "Ready" ...
	I1123 09:58:25.303846  322139 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:58:25.303947  322139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:58:27.038119  322139 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.730262779s)
	I1123 09:58:27.038194  322139 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.563347534s)
	I1123 09:58:27.038215  322139 addons.go:495] Verifying addon metrics-server=true in "embed-certs-412583"
	I1123 09:58:27.038425  322139 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.251268911s)
	I1123 09:58:27.038452  322139 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.734490994s)
	I1123 09:58:27.038474  322139 api_server.go:72] duration metric: took 4.360181863s to wait for apiserver process to appear ...
	I1123 09:58:27.038495  322139 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:58:27.038512  322139 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:58:27.038986  322139 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.931239095s)
	I1123 09:58:27.040662  322139 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-412583 addons enable metrics-server
	
	I1123 09:58:27.049441  322139 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 09:58:27.051186  322139 api_server.go:141] control plane version: v1.34.1
	I1123 09:58:27.051287  322139 api_server.go:131] duration metric: took 12.782895ms to wait for apiserver health ...
	I1123 09:58:27.051322  322139 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:58:27.057840  322139 system_pods.go:59] 9 kube-system pods found
	I1123 09:58:27.057894  322139 system_pods.go:61] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:27.057908  322139 system_pods.go:61] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:58:27.057919  322139 system_pods.go:61] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:58:27.057940  322139 system_pods.go:61] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:58:27.057947  322139 system_pods.go:61] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:58:27.057951  322139 system_pods.go:61] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:58:27.057957  322139 system_pods.go:61] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:58:27.057962  322139 system_pods.go:61] "metrics-server-746fcd58dc-5bq5f" [856d4db7-3788-41a2-98d4-e61a5d997e43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:58:27.057975  322139 system_pods.go:61] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:27.057988  322139 system_pods.go:74] duration metric: took 6.449125ms to wait for pod list to return data ...
	I1123 09:58:27.058002  322139 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:58:27.061637  322139 default_sa.go:45] found service account: "default"
	I1123 09:58:27.061669  322139 default_sa.go:55] duration metric: took 3.65968ms for default service account to be created ...
	I1123 09:58:27.061681  322139 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:58:27.062869  322139 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I1123 09:58:27.064609  322139 addons.go:530] duration metric: took 4.385954428s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I1123 09:58:27.066570  322139 system_pods.go:86] 9 kube-system pods found
	I1123 09:58:27.066606  322139 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:27.066621  322139 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:58:27.066629  322139 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:58:27.066643  322139 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:58:27.066649  322139 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:58:27.066653  322139 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:58:27.066658  322139 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:58:27.066662  322139 system_pods.go:89] "metrics-server-746fcd58dc-5bq5f" [856d4db7-3788-41a2-98d4-e61a5d997e43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:58:27.066667  322139 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:27.066674  322139 system_pods.go:126] duration metric: took 4.987876ms to wait for k8s-apps to be running ...
	I1123 09:58:27.066682  322139 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:58:27.066728  322139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:58:27.084505  322139 system_svc.go:56] duration metric: took 17.815139ms WaitForService to wait for kubelet
	I1123 09:58:27.084533  322139 kubeadm.go:587] duration metric: took 4.406240193s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:58:27.084548  322139 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:58:27.088257  322139 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:58:27.088292  322139 node_conditions.go:123] node cpu capacity is 8
	I1123 09:58:27.088309  322139 node_conditions.go:105] duration metric: took 3.756078ms to run NodePressure ...
	I1123 09:58:27.088325  322139 start.go:242] waiting for startup goroutines ...
	I1123 09:58:27.088345  322139 start.go:247] waiting for cluster config update ...
	I1123 09:58:27.088359  322139 start.go:256] writing updated cluster config ...
	I1123 09:58:27.088712  322139 ssh_runner.go:195] Run: rm -f paused
	I1123 09:58:27.093478  322139 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:58:27.098111  322139 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:58:29.104647  322139 pod_ready.go:104] pod "coredns-66bc5c9577-8dgc7" is not "Ready", error: <nil>
	I1123 09:58:27.010224  322309 addons.go:530] duration metric: took 4.049374736s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1123 09:58:27.015771  322309 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:58:27.015817  322309 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:58:27.501093  322309 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:58:27.506121  322309 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:58:27.506153  322309 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:58:28.001522  322309 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:58:28.006128  322309 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 09:58:28.008093  322309 api_server.go:141] control plane version: v1.34.1
	I1123 09:58:28.008128  322309 api_server.go:131] duration metric: took 1.007447817s to wait for apiserver health ...
	I1123 09:58:28.008140  322309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:58:28.012632  322309 system_pods.go:59] 9 kube-system pods found
	I1123 09:58:28.012693  322309 system_pods.go:61] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:28.012707  322309 system_pods.go:61] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:58:28.012732  322309 system_pods.go:61] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:58:28.012741  322309 system_pods.go:61] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:58:28.012753  322309 system_pods.go:61] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:58:28.012760  322309 system_pods.go:61] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:58:28.012765  322309 system_pods.go:61] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:58:28.012773  322309 system_pods.go:61] "metrics-server-746fcd58dc-gtpxg" [91f7dd1b-5d54-4720-9cd3-bd846b219cd8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:58:28.012782  322309 system_pods.go:61] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:28.012789  322309 system_pods.go:74] duration metric: took 4.643282ms to wait for pod list to return data ...
	I1123 09:58:28.012799  322309 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:58:28.015740  322309 default_sa.go:45] found service account: "default"
	I1123 09:58:28.015766  322309 default_sa.go:55] duration metric: took 2.958976ms for default service account to be created ...
	I1123 09:58:28.015776  322309 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:58:28.019218  322309 system_pods.go:86] 9 kube-system pods found
	I1123 09:58:28.019258  322309 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:28.019271  322309 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:58:28.019282  322309 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:58:28.019294  322309 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:58:28.019302  322309 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:58:28.019311  322309 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:58:28.019322  322309 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:58:28.019386  322309 system_pods.go:89] "metrics-server-746fcd58dc-gtpxg" [91f7dd1b-5d54-4720-9cd3-bd846b219cd8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:58:28.019404  322309 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:28.019414  322309 system_pods.go:126] duration metric: took 3.631818ms to wait for k8s-apps to be running ...
	I1123 09:58:28.019427  322309 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:58:28.019480  322309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:58:28.039272  322309 system_svc.go:56] duration metric: took 19.836608ms WaitForService to wait for kubelet
	I1123 09:58:28.039305  322309 kubeadm.go:587] duration metric: took 5.078530615s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:58:28.039348  322309 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:58:28.042824  322309 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:58:28.042860  322309 node_conditions.go:123] node cpu capacity is 8
	I1123 09:58:28.042880  322309 node_conditions.go:105] duration metric: took 3.526093ms to run NodePressure ...
	I1123 09:58:28.042895  322309 start.go:242] waiting for startup goroutines ...
	I1123 09:58:28.042906  322309 start.go:247] waiting for cluster config update ...
	I1123 09:58:28.042926  322309 start.go:256] writing updated cluster config ...
	I1123 09:58:28.043236  322309 ssh_runner.go:195] Run: rm -f paused
	I1123 09:58:28.048448  322309 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:58:28.054721  322309 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:58:30.061547  322309 pod_ready.go:104] pod "coredns-66bc5c9577-sx25q" is not "Ready", error: <nil>
	W1123 09:58:27.966936  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	W1123 09:58:29.967457  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	8447438246f63       56cc512116c8f       7 seconds ago       Running             busybox                   0                   e97d1ab2108e1       busybox                                                default
	f45b6674fee79       52546a367cc9e       13 seconds ago      Running             coredns                   0                   478a15b3e8809       coredns-66bc5c9577-49wlg                               kube-system
	88f6eeddc1856       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   d28e7710f13fc       storage-provisioner                                    kube-system
	02522085d67a4       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   9ce22c41aa99c       kindnet-kx2hw                                          kube-system
	62dd8f139861d       fc25172553d79       24 seconds ago      Running             kube-proxy                0                   ff78308b78ac3       kube-proxy-q6wsc                                       kube-system
	c4ba281063cb0       c80c8dbafe7dd       36 seconds ago      Running             kube-controller-manager   0                   adb1246cb4b28       kube-controller-manager-default-k8s-diff-port-696492   kube-system
	842222ab6c244       5f1f5298c888d       36 seconds ago      Running             etcd                      0                   6fda8451f90ff       etcd-default-k8s-diff-port-696492                      kube-system
	52012eaf34144       7dd6aaa1717ab       36 seconds ago      Running             kube-scheduler            0                   a2ac0fa566c5d       kube-scheduler-default-k8s-diff-port-696492            kube-system
	260483ba1a152       c3994bc696102       36 seconds ago      Running             kube-apiserver            0                   33d780512464d       kube-apiserver-default-k8s-diff-port-696492            kube-system
	
	
	==> containerd <==
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.886977891Z" level=info msg="StartContainer for \"88f6eeddc18564a70f0c3c28d32fa11b88032e467a4769be8046cf8d399a116d\""
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.888256509Z" level=info msg="connecting to shim 88f6eeddc18564a70f0c3c28d32fa11b88032e467a4769be8046cf8d399a116d" address="unix:///run/containerd/s/21dc298e388a58283d9f7e9de3c335cc8020cd3253d7f00adc02472438f35f28" protocol=ttrpc version=3
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.891241530Z" level=info msg="CreateContainer within sandbox \"478a15b3e8809d0d0cde5ecc7b3ca9f7a11f14627d862d9f3680782ea53ee42d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.900248677Z" level=info msg="Container f45b6674fee79d5f0ee76cd999de2d963f3455967a3a0f5e273a6278dd55b594: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.908587231Z" level=info msg="CreateContainer within sandbox \"478a15b3e8809d0d0cde5ecc7b3ca9f7a11f14627d862d9f3680782ea53ee42d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f45b6674fee79d5f0ee76cd999de2d963f3455967a3a0f5e273a6278dd55b594\""
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.909593531Z" level=info msg="StartContainer for \"f45b6674fee79d5f0ee76cd999de2d963f3455967a3a0f5e273a6278dd55b594\""
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.910709483Z" level=info msg="connecting to shim f45b6674fee79d5f0ee76cd999de2d963f3455967a3a0f5e273a6278dd55b594" address="unix:///run/containerd/s/fd72551db93b76b30e6e5e6c56cf734dfc4bebb23af37fa9336a8c2893ca7a72" protocol=ttrpc version=3
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.941158498Z" level=info msg="StartContainer for \"88f6eeddc18564a70f0c3c28d32fa11b88032e467a4769be8046cf8d399a116d\" returns successfully"
	Nov 23 09:58:21 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:21.005600388Z" level=info msg="StartContainer for \"f45b6674fee79d5f0ee76cd999de2d963f3455967a3a0f5e273a6278dd55b594\" returns successfully"
	Nov 23 09:58:24 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:24.143871334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e7cb3e3f-9c9d-4b5c-ae5d-efdfc6bb9330,Namespace:default,Attempt:0,}"
	Nov 23 09:58:24 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:24.202138909Z" level=info msg="connecting to shim e97d1ab2108e111925782798fb153a04508c1c587e92529beff66f3b24b7ef46" address="unix:///run/containerd/s/e1ef4feccc734ee6546949826a9ecabc0b203d14a3193efa1a36e4f1523566c3" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:58:24 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:24.319382624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e7cb3e3f-9c9d-4b5c-ae5d-efdfc6bb9330,Namespace:default,Attempt:0,} returns sandbox id \"e97d1ab2108e111925782798fb153a04508c1c587e92529beff66f3b24b7ef46\""
	Nov 23 09:58:24 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:24.325375359Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.513056507Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.514040258Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396644"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.515907363Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.518717877Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.519316855Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.193868575s"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.519382474Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.527056005Z" level=info msg="CreateContainer within sandbox \"e97d1ab2108e111925782798fb153a04508c1c587e92529beff66f3b24b7ef46\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.546635344Z" level=info msg="Container 8447438246f639df37b67de53a953c7f4e832ee623d3e1591ea833c548022b03: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.558968009Z" level=info msg="CreateContainer within sandbox \"e97d1ab2108e111925782798fb153a04508c1c587e92529beff66f3b24b7ef46\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8447438246f639df37b67de53a953c7f4e832ee623d3e1591ea833c548022b03\""
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.560905545Z" level=info msg="StartContainer for \"8447438246f639df37b67de53a953c7f4e832ee623d3e1591ea833c548022b03\""
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.565647935Z" level=info msg="connecting to shim 8447438246f639df37b67de53a953c7f4e832ee623d3e1591ea833c548022b03" address="unix:///run/containerd/s/e1ef4feccc734ee6546949826a9ecabc0b203d14a3193efa1a36e4f1523566c3" protocol=ttrpc version=3
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.690801107Z" level=info msg="StartContainer for \"8447438246f639df37b67de53a953c7f4e832ee623d3e1591ea833c548022b03\" returns successfully"
	
	
	==> coredns [f45b6674fee79d5f0ee76cd999de2d963f3455967a3a0f5e273a6278dd55b594] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60147 - 47991 "HINFO IN 7168823184494500575.1194822797604877992. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033141887s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-696492
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-696492
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=default-k8s-diff-port-696492
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_58_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:58:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-696492
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:58:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:58:33 +0000   Sun, 23 Nov 2025 09:57:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:58:33 +0000   Sun, 23 Nov 2025 09:57:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:58:33 +0000   Sun, 23 Nov 2025 09:57:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:58:33 +0000   Sun, 23 Nov 2025 09:58:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-696492
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c6439931-9547-4eff-a445-4b28dd7aea61
	  Boot ID:                    e4c4d39b-bebd-4037-9237-26b945dbe084
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-49wlg                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-696492                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-kx2hw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-696492             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-696492    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-q6wsc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-696492             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-696492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-696492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 37s)  kubelet          Node default-k8s-diff-port-696492 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node default-k8s-diff-port-696492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node default-k8s-diff-port-696492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node default-k8s-diff-port-696492 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node default-k8s-diff-port-696492 event: Registered Node default-k8s-diff-port-696492 in Controller
	  Normal  NodeReady                14s                kubelet          Node default-k8s-diff-port-696492 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.288463] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 09:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[Nov23 09:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.195562] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +5.912917] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 c0 1c 98 33 a9 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.002091] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 47 bd bf 96 57 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[  +4.460318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 85 b9 91 f8 a4 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +2.904694] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	[Nov23 09:57] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 76 48 bf 8b d1 fc 08 06
	[  +0.000931] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	
	
	==> etcd [842222ab6c244214fb7ee6baeb300cef7642a0363f771b03d1a504ac99132070] <==
	{"level":"warn","ts":"2025-11-23T09:57:59.618227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.638239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.648857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.662594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.668479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.678088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.686705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.693977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.702587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.712188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.721797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.732823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.741907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.751410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.760428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.769703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.778978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.788950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.796949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.806234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.816850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.828714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.837858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.848154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.929442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50668","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:58:34 up 40 min,  0 user,  load average: 6.24, 4.55, 2.83
	Linux default-k8s-diff-port-696492 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [02522085d67a410254267ee219e6627961454b738df21c14c684ae238c0fe4b6] <==
	I1123 09:58:10.059807       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:58:10.060103       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 09:58:10.060363       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:58:10.060390       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:58:10.060422       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:58:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:58:10.358743       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:58:10.358912       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:58:10.358935       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:58:10.359166       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:58:10.839305       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:58:10.839358       1 metrics.go:72] Registering metrics
	I1123 09:58:10.839452       1 controller.go:711] "Syncing nftables rules"
	I1123 09:58:20.360635       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:58:20.360682       1 main.go:301] handling current node
	I1123 09:58:30.360250       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:58:30.360310       1 main.go:301] handling current node
	
	
	==> kube-apiserver [260483ba1a1523f842d7822582fa2c0eccb179009df5831d6ae999dcb45e74d0] <==
	I1123 09:58:00.611450       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:58:00.611457       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:58:00.611464       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:58:00.614872       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:58:00.620675       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:58:00.635605       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:58:00.657631       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:58:01.517017       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:58:01.522440       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:58:01.522467       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:58:02.419506       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:58:02.477910       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:58:02.574517       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:58:02.627077       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:58:02.646696       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 09:58:02.650246       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:58:02.657854       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:58:03.654632       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:58:03.670885       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:58:03.686658       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:58:07.728773       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:58:07.736964       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:58:08.326282       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 09:58:08.525001       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1123 09:58:32.974009       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:49718: use of closed network connection
	
	
	==> kube-controller-manager [c4ba281063cb08c4a19749761d1dafbb99802bd3aa3a7a50087abdb2e15455fd] <==
	I1123 09:58:07.533842       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 09:58:07.536236       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-696492" podCIDRs=["10.244.0.0/24"]
	I1123 09:58:07.551416       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 09:58:07.563875       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:58:07.572132       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 09:58:07.572173       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:58:07.572257       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:58:07.572261       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 09:58:07.572274       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:58:07.572717       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:58:07.572831       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:58:07.572905       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:58:07.573049       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:58:07.573954       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:58:07.574044       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:58:07.574804       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:58:07.574955       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:58:07.578104       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:58:07.579306       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:58:07.579326       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:58:07.580555       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:58:07.587320       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:58:07.593522       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 09:58:07.593612       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:58:22.524214       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [62dd8f139861d152370867f5755d14af5c5c3ef214c0e4c570ca082f5a3b25d7] <==
	I1123 09:58:09.567863       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:58:09.636549       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:58:09.736714       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:58:09.736757       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 09:58:09.736888       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:58:09.768239       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:58:09.768353       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:58:09.775207       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:58:09.775865       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:58:09.775907       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:58:09.777697       1 config.go:309] "Starting node config controller"
	I1123 09:58:09.777770       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:58:09.777780       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:58:09.777998       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:58:09.778012       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:58:09.778021       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:58:09.778392       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:58:09.778918       1 config.go:200] "Starting service config controller"
	I1123 09:58:09.778940       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:58:09.879226       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:58:09.879236       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:58:09.880040       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [52012eaf341449ecd532cfe1abc80dc23366de525e1fd5c3c7cb1f9af315c852] <==
	E1123 09:58:00.585445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:58:00.585484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:58:00.585511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:58:00.585606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:58:00.585652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:58:00.585874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:58:01.455228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:58:01.480919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:58:01.503650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:58:01.542689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:58:01.662847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:58:01.752041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:58:01.785844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:58:01.824612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:58:01.829682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:58:01.844562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:58:01.857188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:58:01.861059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:58:01.870728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:58:01.877522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:58:01.890844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:58:01.891558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:58:01.937859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:58:01.994498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1123 09:58:03.479527       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412516    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45rc\" (UniqueName: \"kubernetes.io/projected/1c3d2821-8e77-421a-8ccc-8d3d76d1380d-kube-api-access-l45rc\") pod \"kindnet-kx2hw\" (UID: \"1c3d2821-8e77-421a-8ccc-8d3d76d1380d\") " pod="kube-system/kindnet-kx2hw"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412560    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad2f26f5-ff1d-4acf-bea5-8ad34dc37130-kube-proxy\") pod \"kube-proxy-q6wsc\" (UID: \"ad2f26f5-ff1d-4acf-bea5-8ad34dc37130\") " pod="kube-system/kube-proxy-q6wsc"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412576    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2c2z\" (UniqueName: \"kubernetes.io/projected/ad2f26f5-ff1d-4acf-bea5-8ad34dc37130-kube-api-access-c2c2z\") pod \"kube-proxy-q6wsc\" (UID: \"ad2f26f5-ff1d-4acf-bea5-8ad34dc37130\") " pod="kube-system/kube-proxy-q6wsc"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412597    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1c3d2821-8e77-421a-8ccc-8d3d76d1380d-cni-cfg\") pod \"kindnet-kx2hw\" (UID: \"1c3d2821-8e77-421a-8ccc-8d3d76d1380d\") " pod="kube-system/kindnet-kx2hw"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412615    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c3d2821-8e77-421a-8ccc-8d3d76d1380d-xtables-lock\") pod \"kindnet-kx2hw\" (UID: \"1c3d2821-8e77-421a-8ccc-8d3d76d1380d\") " pod="kube-system/kindnet-kx2hw"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412633    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c3d2821-8e77-421a-8ccc-8d3d76d1380d-lib-modules\") pod \"kindnet-kx2hw\" (UID: \"1c3d2821-8e77-421a-8ccc-8d3d76d1380d\") " pod="kube-system/kindnet-kx2hw"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412737    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad2f26f5-ff1d-4acf-bea5-8ad34dc37130-lib-modules\") pod \"kube-proxy-q6wsc\" (UID: \"ad2f26f5-ff1d-4acf-bea5-8ad34dc37130\") " pod="kube-system/kube-proxy-q6wsc"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:08.522177    1459 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:08.522231    1459 projected.go:196] Error preparing data for projected volume kube-api-access-c2c2z for pod kube-system/kube-proxy-q6wsc: configmap "kube-root-ca.crt" not found
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:08.522183    1459 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:08.522316    1459 projected.go:196] Error preparing data for projected volume kube-api-access-l45rc for pod kube-system/kindnet-kx2hw: configmap "kube-root-ca.crt" not found
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:08.522386    1459 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ad2f26f5-ff1d-4acf-bea5-8ad34dc37130-kube-api-access-c2c2z podName:ad2f26f5-ff1d-4acf-bea5-8ad34dc37130 nodeName:}" failed. No retries permitted until 2025-11-23 09:58:09.022312027 +0000 UTC m=+5.615544873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c2c2z" (UniqueName: "kubernetes.io/projected/ad2f26f5-ff1d-4acf-bea5-8ad34dc37130-kube-api-access-c2c2z") pod "kube-proxy-q6wsc" (UID: "ad2f26f5-ff1d-4acf-bea5-8ad34dc37130") : configmap "kube-root-ca.crt" not found
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:08.522420    1459 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c3d2821-8e77-421a-8ccc-8d3d76d1380d-kube-api-access-l45rc podName:1c3d2821-8e77-421a-8ccc-8d3d76d1380d nodeName:}" failed. No retries permitted until 2025-11-23 09:58:09.022396574 +0000 UTC m=+5.615629419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l45rc" (UniqueName: "kubernetes.io/projected/1c3d2821-8e77-421a-8ccc-8d3d76d1380d-kube-api-access-l45rc") pod "kindnet-kx2hw" (UID: "1c3d2821-8e77-421a-8ccc-8d3d76d1380d") : configmap "kube-root-ca.crt" not found
	Nov 23 09:58:10 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:10.552412    1459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q6wsc" podStartSLOduration=2.552388347 podStartE2EDuration="2.552388347s" podCreationTimestamp="2025-11-23 09:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:58:10.552255516 +0000 UTC m=+7.145488365" watchObservedRunningTime="2025-11-23 09:58:10.552388347 +0000 UTC m=+7.145621252"
	Nov 23 09:58:10 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:10.729543    1459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kx2hw" podStartSLOduration=2.729516565 podStartE2EDuration="2.729516565s" podCreationTimestamp="2025-11-23 09:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:58:10.584486608 +0000 UTC m=+7.177719456" watchObservedRunningTime="2025-11-23 09:58:10.729516565 +0000 UTC m=+7.322749413"
	Nov 23 09:58:20 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:20.391108    1459 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:58:20 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:20.506719    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4786\" (UniqueName: \"kubernetes.io/projected/967d1f43-a5b7-4bf8-8111-c014f4b7594f-kube-api-access-r4786\") pod \"coredns-66bc5c9577-49wlg\" (UID: \"967d1f43-a5b7-4bf8-8111-c014f4b7594f\") " pod="kube-system/coredns-66bc5c9577-49wlg"
	Nov 23 09:58:20 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:20.506792    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7rd\" (UniqueName: \"kubernetes.io/projected/bbfe2e2e-e519-43f0-8575-91a152db45bf-kube-api-access-bc7rd\") pod \"storage-provisioner\" (UID: \"bbfe2e2e-e519-43f0-8575-91a152db45bf\") " pod="kube-system/storage-provisioner"
	Nov 23 09:58:20 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:20.506858    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/967d1f43-a5b7-4bf8-8111-c014f4b7594f-config-volume\") pod \"coredns-66bc5c9577-49wlg\" (UID: \"967d1f43-a5b7-4bf8-8111-c014f4b7594f\") " pod="kube-system/coredns-66bc5c9577-49wlg"
	Nov 23 09:58:20 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:20.506886    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbfe2e2e-e519-43f0-8575-91a152db45bf-tmp\") pod \"storage-provisioner\" (UID: \"bbfe2e2e-e519-43f0-8575-91a152db45bf\") " pod="kube-system/storage-provisioner"
	Nov 23 09:58:21 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:21.590940    1459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-49wlg" podStartSLOduration=13.590915197 podStartE2EDuration="13.590915197s" podCreationTimestamp="2025-11-23 09:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:58:21.590082134 +0000 UTC m=+18.183314984" watchObservedRunningTime="2025-11-23 09:58:21.590915197 +0000 UTC m=+18.184148045"
	Nov 23 09:58:21 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:21.627669    1459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.627626835000001 podStartE2EDuration="12.627626835s" podCreationTimestamp="2025-11-23 09:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:58:21.609573127 +0000 UTC m=+18.202805976" watchObservedRunningTime="2025-11-23 09:58:21.627626835 +0000 UTC m=+18.220859682"
	Nov 23 09:58:23 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:23.931886    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7lj4\" (UniqueName: \"kubernetes.io/projected/e7cb3e3f-9c9d-4b5c-ae5d-efdfc6bb9330-kube-api-access-j7lj4\") pod \"busybox\" (UID: \"e7cb3e3f-9c9d-4b5c-ae5d-efdfc6bb9330\") " pod="default/busybox"
	Nov 23 09:58:27 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:27.640068    1459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.442575199 podStartE2EDuration="4.640045849s" podCreationTimestamp="2025-11-23 09:58:23 +0000 UTC" firstStartedPulling="2025-11-23 09:58:24.323602215 +0000 UTC m=+20.916835047" lastFinishedPulling="2025-11-23 09:58:26.521072867 +0000 UTC m=+23.114305697" observedRunningTime="2025-11-23 09:58:27.63967862 +0000 UTC m=+24.232911489" watchObservedRunningTime="2025-11-23 09:58:27.640045849 +0000 UTC m=+24.233278696"
	Nov 23 09:58:32 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:32.973717    1459 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.85.2:41480->192.168.85.2:10010: write tcp 192.168.85.2:41480->192.168.85.2:10010: write: broken pipe
	
	
	==> storage-provisioner [88f6eeddc18564a70f0c3c28d32fa11b88032e467a4769be8046cf8d399a116d] <==
	I1123 09:58:20.953574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:58:20.967250       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:58:20.967723       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:58:20.973297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:20.984499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:58:20.984770       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:58:20.985206       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b60e9482-d678-4958-8cff-3ab7d57cc846", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-696492_a77c9ea0-60d8-4e87-a0f2-4b293fa6d6a5 became leader
	I1123 09:58:20.985655       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-696492_a77c9ea0-60d8-4e87-a0f2-4b293fa6d6a5!
	W1123 09:58:20.992574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:21.007436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:58:21.086621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-696492_a77c9ea0-60d8-4e87-a0f2-4b293fa6d6a5!
	W1123 09:58:23.012785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:23.019724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:25.024580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:25.030065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:27.034275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:27.042707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:29.047910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:29.052847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:31.057045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:31.063439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:33.068658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:33.077146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-696492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-696492
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-696492:

-- stdout --
	[
	    {
	        "Id": "af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32",
	        "Created": "2025-11-23T09:57:46.827229115Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 312188,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:57:46.872164848Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32/hostname",
	        "HostsPath": "/var/lib/docker/containers/af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32/hosts",
	        "LogPath": "/var/lib/docker/containers/af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32/af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32-json.log",
	        "Name": "/default-k8s-diff-port-696492",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-696492:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-696492",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "af7d620060aac474095e35eedc7a91843249d7d678679fccbca19b8585d1ce32",
	                "LowerDir": "/var/lib/docker/overlay2/3bd5d98036ec5cf749b85e9a5093210a965ee5843659df77fdd16ca6b0178a73-init/diff:/var/lib/docker/overlay2/c80a0dfdb81b7753b0a82e2bc6458805cbbad0a9ce5819c63e1d9b7b71ba226c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3bd5d98036ec5cf749b85e9a5093210a965ee5843659df77fdd16ca6b0178a73/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3bd5d98036ec5cf749b85e9a5093210a965ee5843659df77fdd16ca6b0178a73/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3bd5d98036ec5cf749b85e9a5093210a965ee5843659df77fdd16ca6b0178a73/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-696492",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-696492/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-696492",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-696492",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-696492",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c139c5c1061e3186dbf9016bce9aa974edaaef31339f75c4bd78d5704691bbfd",
	            "SandboxKey": "/var/run/docker/netns/c139c5c1061e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-696492": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ca961fd9658a4dcdf2dc766f9a71dcbc96f2bd9acb1a01fb0e9f54d16847232",
	                    "EndpointID": "524f07a9a39cab86a5af3cc9a2b50c1fcde9e4f2792e290296190f6ccec0a828",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d2:b0:c6:c1:04:87",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-696492",
	                        "af7d620060aa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-696492 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-696492 logs -n 25: (1.294081642s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p bridge-676928 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo containerd config dump                                                                                                                                                                                                        │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p bridge-676928 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ ssh     │ -p bridge-676928 sudo crio config                                                                                                                                                                                                                   │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p bridge-676928                                                                                                                                                                                                                                    │ bridge-676928                │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ delete  │ -p disable-driver-mounts-178820                                                                                                                                                                                                                     │ disable-driver-mounts-178820 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ start   │ -p default-k8s-diff-port-696492 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-696492 │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:58 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-709593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:57 UTC │
	│ stop    │ -p old-k8s-version-709593 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:57 UTC │ 23 Nov 25 09:58 UTC │
	│ addons  │ enable metrics-server -p embed-certs-412583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-412583           │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-309734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-309734            │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ stop    │ -p embed-certs-412583 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-412583           │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ stop    │ -p no-preload-309734 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-309734            │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-709593 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ start   │ -p old-k8s-version-709593 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-709593       │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-412583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-412583           │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ start   │ -p embed-certs-412583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-412583           │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-309734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-309734            │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │ 23 Nov 25 09:58 UTC │
	│ start   │ -p no-preload-309734 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-309734            │ jenkins │ v1.37.0 │ 23 Nov 25 09:58 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:58:15
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:58:15.072651  322309 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:58:15.072769  322309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:58:15.072779  322309 out.go:374] Setting ErrFile to fd 2...
	I1123 09:58:15.072783  322309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:58:15.073028  322309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:58:15.073488  322309 out.go:368] Setting JSON to false
	I1123 09:58:15.074642  322309 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2434,"bootTime":1763889461,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:58:15.074708  322309 start.go:143] virtualization: kvm guest
	I1123 09:58:15.077222  322309 out.go:179] * [no-preload-309734] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:58:15.078795  322309 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:58:15.078861  322309 notify.go:221] Checking for updates...
	I1123 09:58:15.081612  322309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:58:15.083592  322309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:58:15.085012  322309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:58:15.086449  322309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:58:15.037472  322139 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:58:15.037519  322139 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 09:58:15.037543  322139 cache.go:65] Caching tarball of preloaded images
	I1123 09:58:15.037602  322139 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:58:15.037626  322139 preload.go:238] Found /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 09:58:15.037815  322139 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 09:58:15.037968  322139 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/config.json ...
	I1123 09:58:15.065607  322139 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:58:15.065630  322139 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:58:15.065651  322139 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:58:15.065688  322139 start.go:360] acquireMachinesLock for embed-certs-412583: {Name:mk2ebf094fb67f9062146f05e50688fe8a83a51f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.065782  322139 start.go:364] duration metric: took 55.77µs to acquireMachinesLock for "embed-certs-412583"
	I1123 09:58:15.065826  322139 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:58:15.065836  322139 fix.go:54] fixHost starting: 
	I1123 09:58:15.066101  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:15.086962  322139 fix.go:112] recreateIfNeeded on embed-certs-412583: state=Stopped err=<nil>
	W1123 09:58:15.086994  322139 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:58:15.088780  322309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:58:15.090713  322309 config.go:182] Loaded profile config "no-preload-309734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:58:15.091534  322309 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:58:15.142488  322309 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:58:15.142608  322309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:58:15.238772  322309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 09:58:15.226367289 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:58:15.238927  322309 docker.go:319] overlay module found
	I1123 09:58:15.241487  322309 out.go:179] * Using the docker driver based on existing profile
	I1123 09:58:15.242969  322309 start.go:309] selected driver: docker
	I1123 09:58:15.242994  322309 start.go:927] validating driver "docker" against &{Name:no-preload-309734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-309734 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:58:15.243100  322309 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:58:15.243879  322309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:58:15.337610  322309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-23 09:58:15.318695864 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:58:15.337997  322309 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:58:15.338035  322309 cni.go:84] Creating CNI manager for ""
	I1123 09:58:15.338101  322309 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:58:15.338146  322309 start.go:353] cluster config:
	{Name:no-preload-309734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-309734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:58:15.340626  322309 out.go:179] * Starting "no-preload-309734" primary control-plane node in "no-preload-309734" cluster
	I1123 09:58:15.342090  322309 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 09:58:15.343441  322309 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:58:15.344764  322309 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:58:15.344928  322309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/config.json ...
	I1123 09:58:15.345379  322309 cache.go:107] acquiring lock: {Name:mk112461026d48693cc25788bbfb66278c54f619 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345475  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 09:58:15.345502  322309 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 130.227µs
	I1123 09:58:15.345522  322309 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 09:58:15.345547  322309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:58:15.345665  322309 cache.go:107] acquiring lock: {Name:mkd4fe11e7e40464d53a2ff6b0744dfdf60a0875 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345733  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 09:58:15.345742  322309 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 84.682µs
	I1123 09:58:15.345761  322309 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 09:58:15.345776  322309 cache.go:107] acquiring lock: {Name:mkb1b2704e1a1eae76c0dbc69daffb8fbf8e8b17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345794  322309 cache.go:107] acquiring lock: {Name:mkead7f7924767c6c5c6ba37b30d495d696cb12e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345822  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 09:58:15.345829  322309 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 56.357µs
	I1123 09:58:15.345837  322309 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 09:58:15.345853  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 09:58:15.345851  322309 cache.go:107] acquiring lock: {Name:mk79f56807c84f4c041d28aec3cf7394e6568026 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345860  322309 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 81.717µs
	I1123 09:58:15.345869  322309 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 09:58:15.345887  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 09:58:15.345882  322309 cache.go:107] acquiring lock: {Name:mk34f30227131fc2a94276e966f4a2f34086895a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345894  322309 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 46.052µs
	I1123 09:58:15.345902  322309 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 09:58:15.345917  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1123 09:58:15.345924  322309 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 44.701µs
	I1123 09:58:15.345932  322309 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 09:58:15.345946  322309 cache.go:107] acquiring lock: {Name:mk8276b58635a3e009984be7b62fe8a1c1fe3134 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345949  322309 cache.go:107] acquiring lock: {Name:mk5361059e757e1792013f0f7e2d2932441044f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.345981  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 09:58:15.345988  322309 cache.go:115] /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 09:58:15.345988  322309 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 45.104µs
	I1123 09:58:15.345996  322309 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 09:58:15.345996  322309 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 51.215µs
	I1123 09:58:15.346004  322309 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 09:58:15.346011  322309 cache.go:87] Successfully saved all images to host disk.
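The cache hits above all follow the same pattern: take a per-image lock, check whether the tar file already exists under the cache directory, and only export the image when it is missing. A minimal Go sketch of that fast-path check, assuming a hypothetical cachePath helper and save callback (this is not minikube's actual API, just an illustration of the pattern visible in the log):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"sync"
	)

	// cachePath is a hypothetical helper mapping an image ref to a tar path,
	// e.g. registry.k8s.io/pause:3.10.1 -> <root>/registry.k8s.io/pause_3.10.1.
	func cachePath(root, image string) string {
		return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
	}

	var locks sync.Map // one mutex per image ref, mirroring the per-image lock acquisition above

	func ensureCached(root, image string, save func(dest string) error) error {
		mu, _ := locks.LoadOrStore(image, &sync.Mutex{})
		mu.(*sync.Mutex).Lock()
		defer mu.(*sync.Mutex).Unlock()

		dest := cachePath(root, image)
		if _, err := os.Stat(dest); err == nil {
			fmt.Printf("cache image %q -> %q: already exists\n", image, dest)
			return nil // fast path: the tar is already on disk, as in the log lines above
		}
		return save(dest) // slow path: actually export the image to a tar file
	}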
	I1123 09:58:15.386415  322309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:58:15.386537  322309 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:58:15.386584  322309 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:58:15.386664  322309 start.go:360] acquireMachinesLock for no-preload-309734: {Name:mk62afa41d2500936444190e148c873f4b7bcc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:58:15.386813  322309 start.go:364] duration metric: took 81.739µs to acquireMachinesLock for "no-preload-309734"
	I1123 09:58:15.386837  322309 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:58:15.386881  322309 fix.go:54] fixHost starting: 
	I1123 09:58:15.388071  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:15.420692  322309 fix.go:112] recreateIfNeeded on no-preload-309734: state=Stopped err=<nil>
	W1123 09:58:15.420757  322309 fix.go:138] unexpected machine state, will restart: <nil>
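The fixHost step above decides whether to restart the machine by inspecting the container's state with a Go template. A rough sketch of that check using the Docker CLI via os/exec (the profile name is taken from this run; the restart decision is simplified to "anything not running"):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState mirrors the log's
	// "docker container inspect <name> --format={{.State.Status}}" call.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format={{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("no-preload-309734")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		if state != "running" {
			// Corresponds to "unexpected machine state, will restart" above.
			fmt.Println("machine not running, would run: docker start no-preload-309734")
		}
	}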
	I1123 09:58:12.805486  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:58:12.805555  319511 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:58:12.805681  319511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-709593
	I1123 09:58:12.836860  319511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/old-k8s-version-709593/id_rsa Username:docker}
	I1123 09:58:12.838996  319511 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:58:12.839024  319511 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:58:12.839088  319511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-709593
	I1123 09:58:12.848801  319511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/old-k8s-version-709593/id_rsa Username:docker}
	I1123 09:58:12.859586  319511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/old-k8s-version-709593/id_rsa Username:docker}
	I1123 09:58:12.873468  319511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/old-k8s-version-709593/id_rsa Username:docker}
	I1123 09:58:12.937177  319511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:58:12.952126  319511 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-709593" to be "Ready" ...
	I1123 09:58:12.969086  319511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:58:12.976204  319511 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 09:58:12.976229  319511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 09:58:12.983442  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:58:12.983473  319511 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:58:13.001233  319511 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 09:58:13.001267  319511 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 09:58:13.004404  319511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:58:13.009276  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:58:13.009301  319511 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:58:13.029767  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:58:13.029801  319511 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:58:13.038403  319511 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:58:13.038429  319511 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 09:58:13.050997  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:58:13.051025  319511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:58:13.058579  319511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:58:13.071241  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:58:13.071363  319511 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:58:13.092144  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:58:13.092175  319511 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:58:13.111646  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:58:13.111676  319511 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:58:13.134509  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:58:13.134541  319511 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:58:13.154447  319511 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:58:13.154473  319511 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:58:13.169068  319511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:58:15.364147  319511 node_ready.go:49] node "old-k8s-version-709593" is "Ready"
	I1123 09:58:15.364190  319511 node_ready.go:38] duration metric: took 2.412025869s for node "old-k8s-version-709593" to be "Ready" ...
	I1123 09:58:15.364208  319511 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:58:15.364263  319511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1123 09:58:13.031451  311138 node_ready.go:57] node "default-k8s-diff-port-696492" has "Ready":"False" status (will retry)
	W1123 09:58:15.032219  311138 node_ready.go:57] node "default-k8s-diff-port-696492" has "Ready":"False" status (will retry)
	I1123 09:58:16.357752  319511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.353308526s)
	I1123 09:58:16.358184  319511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.389065962s)
	I1123 09:58:16.534484  319511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.475808496s)
	I1123 09:58:16.534610  319511 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-709593"
	I1123 09:58:16.907361  319511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.738234252s)
	I1123 09:58:16.907416  319511 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.54313413s)
	I1123 09:58:16.907568  319511 api_server.go:72] duration metric: took 4.149434047s to wait for apiserver process to appear ...
	I1123 09:58:16.907584  319511 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:58:16.907604  319511 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:58:16.912236  319511 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-709593 addons enable metrics-server
	
	I1123 09:58:16.915026  319511 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:58:16.916617  319511 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
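The "waiting for apiserver healthz status" step above is a plain HTTPS GET against /healthz that expects a 200 response with body "ok". A minimal sketch of such a probe; TLS verification is disabled here purely for illustration, whereas the real client trusts the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy returns true when the endpoint answers 200 with body "ok",
	// matching the "https://192.168.76.2:8443/healthz returned 200: ok" lines above.
	func apiserverHealthy(url string) bool {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: minikube authenticates against the cluster's CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok"
	}

	func main() {
		fmt.Println(apiserverHealthy("https://192.168.76.2:8443/healthz"))
	}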
	I1123 09:58:15.088796  322139 out.go:252] * Restarting existing docker container for "embed-certs-412583" ...
	I1123 09:58:15.088879  322139 cli_runner.go:164] Run: docker start embed-certs-412583
	I1123 09:58:15.542380  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:15.569667  322139 kic.go:430] container "embed-certs-412583" state is running.
	I1123 09:58:15.570202  322139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412583
	I1123 09:58:15.603987  322139 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/config.json ...
	I1123 09:58:15.604290  322139 machine.go:94] provisionDockerMachine start ...
	I1123 09:58:15.604407  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:15.631755  322139 main.go:143] libmachine: Using SSH client type: native
	I1123 09:58:15.632218  322139 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 09:58:15.632283  322139 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:58:15.633493  322139 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51154->127.0.0.1:33118: read: connection reset by peer
	I1123 09:58:18.784189  322139 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412583
	
	I1123 09:58:18.784226  322139 ubuntu.go:182] provisioning hostname "embed-certs-412583"
	I1123 09:58:18.784312  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:18.804215  322139 main.go:143] libmachine: Using SSH client type: native
	I1123 09:58:18.804525  322139 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 09:58:18.804542  322139 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-412583 && echo "embed-certs-412583" | sudo tee /etc/hostname
	I1123 09:58:18.963550  322139 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-412583
	
	I1123 09:58:18.963630  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:18.985155  322139 main.go:143] libmachine: Using SSH client type: native
	I1123 09:58:18.985406  322139 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1123 09:58:18.985436  322139 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-412583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-412583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-412583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:58:19.136033  322139 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:58:19.136073  322139 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-3552/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-3552/.minikube}
	I1123 09:58:19.136099  322139 ubuntu.go:190] setting up certificates
	I1123 09:58:19.136127  322139 provision.go:84] configureAuth start
	I1123 09:58:19.136188  322139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412583
	I1123 09:58:19.156884  322139 provision.go:143] copyHostCerts
	I1123 09:58:19.156946  322139 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem, removing ...
	I1123 09:58:19.156960  322139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem
	I1123 09:58:19.157038  322139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem (1123 bytes)
	I1123 09:58:19.157162  322139 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem, removing ...
	I1123 09:58:19.157175  322139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem
	I1123 09:58:19.157204  322139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem (1679 bytes)
	I1123 09:58:19.157275  322139 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem, removing ...
	I1123 09:58:19.157283  322139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem
	I1123 09:58:19.157306  322139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem (1082 bytes)
	I1123 09:58:19.157389  322139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem org=jenkins.embed-certs-412583 san=[127.0.0.1 192.168.103.2 embed-certs-412583 localhost minikube]
	I1123 09:58:19.341356  322139 provision.go:177] copyRemoteCerts
	I1123 09:58:19.341419  322139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:58:19.341455  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:19.361384  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:19.468486  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:58:19.491156  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:58:19.513792  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 09:58:19.535858  322139 provision.go:87] duration metric: took 399.716299ms to configureAuth
	I1123 09:58:19.535897  322139 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:58:19.536067  322139 config.go:182] Loaded profile config "embed-certs-412583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:58:19.536082  322139 machine.go:97] duration metric: took 3.93177997s to provisionDockerMachine
	I1123 09:58:19.536090  322139 start.go:293] postStartSetup for "embed-certs-412583" (driver="docker")
	I1123 09:58:19.536098  322139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:58:19.536142  322139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:58:19.536178  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:19.559284  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:19.665552  322139 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:58:19.669817  322139 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:58:19.669850  322139 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:58:19.669864  322139 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/addons for local assets ...
	I1123 09:58:19.669920  322139 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/files for local assets ...
	I1123 09:58:19.670030  322139 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem -> 71092.pem in /etc/ssl/certs
	I1123 09:58:19.670160  322139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:58:19.679598  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:58:19.700393  322139 start.go:296] duration metric: took 164.286793ms for postStartSetup
	I1123 09:58:19.700617  322139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:58:19.700679  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:19.723251  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:15.423567  322309 out.go:252] * Restarting existing docker container for "no-preload-309734" ...
	I1123 09:58:15.423745  322309 cli_runner.go:164] Run: docker start no-preload-309734
	I1123 09:58:15.785997  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:15.812042  322309 kic.go:430] container "no-preload-309734" state is running.
	I1123 09:58:15.812604  322309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-309734
	I1123 09:58:15.840096  322309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/config.json ...
	I1123 09:58:15.840311  322309 machine.go:94] provisionDockerMachine start ...
	I1123 09:58:15.840397  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:15.861975  322309 main.go:143] libmachine: Using SSH client type: native
	I1123 09:58:15.862276  322309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:58:15.862298  322309 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:58:15.862938  322309 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40370->127.0.0.1:33123: read: connection reset by peer
	I1123 09:58:19.014724  322309 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-309734
	
	I1123 09:58:19.014760  322309 ubuntu.go:182] provisioning hostname "no-preload-309734"
	I1123 09:58:19.014837  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:19.035897  322309 main.go:143] libmachine: Using SSH client type: native
	I1123 09:58:19.036158  322309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:58:19.036180  322309 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-309734 && echo "no-preload-309734" | sudo tee /etc/hostname
	I1123 09:58:19.197111  322309 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-309734
	
	I1123 09:58:19.197222  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:19.217292  322309 main.go:143] libmachine: Using SSH client type: native
	I1123 09:58:19.217599  322309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:58:19.217634  322309 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-309734' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-309734/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-309734' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:58:19.369774  322309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:58:19.369800  322309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-3552/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-3552/.minikube}
	I1123 09:58:19.369826  322309 ubuntu.go:190] setting up certificates
	I1123 09:58:19.369838  322309 provision.go:84] configureAuth start
	I1123 09:58:19.369907  322309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-309734
	I1123 09:58:19.390827  322309 provision.go:143] copyHostCerts
	I1123 09:58:19.390891  322309 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem, removing ...
	I1123 09:58:19.390907  322309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem
	I1123 09:58:19.390973  322309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/ca.pem (1082 bytes)
	I1123 09:58:19.391077  322309 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem, removing ...
	I1123 09:58:19.391092  322309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem
	I1123 09:58:19.391117  322309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/cert.pem (1123 bytes)
	I1123 09:58:19.391233  322309 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem, removing ...
	I1123 09:58:19.391244  322309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem
	I1123 09:58:19.391264  322309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-3552/.minikube/key.pem (1679 bytes)
	I1123 09:58:19.391312  322309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem org=jenkins.no-preload-309734 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-309734]
	I1123 09:58:19.511909  322309 provision.go:177] copyRemoteCerts
	I1123 09:58:19.511965  322309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:58:19.512011  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:19.533318  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:19.642831  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:58:19.662615  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 09:58:19.684205  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:58:19.704592  322309 provision.go:87] duration metric: took 334.741077ms to configureAuth
	I1123 09:58:19.704633  322309 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:58:19.704835  322309 config.go:182] Loaded profile config "no-preload-309734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:58:19.704853  322309 machine.go:97] duration metric: took 3.864533097s to provisionDockerMachine
	I1123 09:58:19.704864  322309 start.go:293] postStartSetup for "no-preload-309734" (driver="docker")
	I1123 09:58:19.704876  322309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:58:19.704946  322309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:58:19.704998  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:19.725972  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:19.830882  322309 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:58:19.835302  322309 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:58:19.835376  322309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:58:19.835404  322309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/addons for local assets ...
	I1123 09:58:19.835474  322309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3552/.minikube/files for local assets ...
	I1123 09:58:19.835585  322309 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem -> 71092.pem in /etc/ssl/certs
	I1123 09:58:19.835733  322309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:58:19.845344  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:58:19.869872  322309 start.go:296] duration metric: took 164.993501ms for postStartSetup
	I1123 09:58:19.869963  322309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:58:19.870010  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:19.891645  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:19.995281  322309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:58:20.000619  322309 fix.go:56] duration metric: took 4.613765689s for fixHost
	I1123 09:58:20.000652  322309 start.go:83] releasing machines lock for "no-preload-309734", held for 4.613822979s
	I1123 09:58:20.000767  322309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-309734
	I1123 09:58:20.022558  322309 ssh_runner.go:195] Run: cat /version.json
	I1123 09:58:20.022576  322309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:58:20.022624  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:20.022662  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:20.044105  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:20.044757  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
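The release step above fans out two commands at once: "cat /version.json" and the registry reachability check "curl -sS -m 2 https://registry.k8s.io/", each over its own SSH session. A sketch of that concurrent fan-out with goroutines, where runCommand is a local stand-in for the real ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
		"sync"
	)

	// runCommand is a stand-in for minikube's ssh_runner; here it just runs locally.
	func runCommand(name string, args ...string) error {
		return exec.Command(name, args...).Run()
	}

	func main() {
		cmds := [][]string{
			{"cat", "/version.json"},
			{"curl", "-sS", "-m", "2", "https://registry.k8s.io/"},
		}
		var wg sync.WaitGroup
		errs := make([]error, len(cmds))
		for i, c := range cmds {
			wg.Add(1)
			go func(i int, c []string) {
				defer wg.Done()
				errs[i] = runCommand(c[0], c[1:]...)
			}(i, c)
		}
		wg.Wait()
		for i, err := range errs {
			fmt.Printf("%v -> err=%v\n", cmds[i], err)
		}
	}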
	I1123 09:58:16.916812  319511 api_server.go:141] control plane version: v1.28.0
	I1123 09:58:16.916844  319511 api_server.go:131] duration metric: took 9.252525ms to wait for apiserver health ...
	I1123 09:58:16.916855  319511 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:58:16.918447  319511 addons.go:530] duration metric: took 4.159873845s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1123 09:58:16.922096  319511 system_pods.go:59] 9 kube-system pods found
	I1123 09:58:16.922144  319511 system_pods.go:61] "coredns-5dd5756b68-gf5sx" [9a493920-3739-4eb9-8426-3590a8f2ee51] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:16.922158  319511 system_pods.go:61] "etcd-old-k8s-version-709593" [ae440f4a-2d2c-44c8-9481-9696039f9cea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:58:16.922169  319511 system_pods.go:61] "kindnet-tpvt2" [fd3daece-c28b-4efa-ae53-16c16790e5be] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:58:16.922182  319511 system_pods.go:61] "kube-apiserver-old-k8s-version-709593" [e9aebd01-2f2f-4e8e-b3b9-365be3da678e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:58:16.922197  319511 system_pods.go:61] "kube-controller-manager-old-k8s-version-709593" [35acfac2-d03f-4f28-b69f-0d34ef891c0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:58:16.922209  319511 system_pods.go:61] "kube-proxy-sgv48" [f5d963bd-a2f2-44d2-969c-d219c55aba33] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:58:16.922223  319511 system_pods.go:61] "kube-scheduler-old-k8s-version-709593" [8d265257-a737-4543-b416-8535ffae7725] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:58:16.922235  319511 system_pods.go:61] "metrics-server-57f55c9bc5-98n6p" [7086738c-57f8-491c-abfa-bfa7c99c5a03] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:58:16.922243  319511 system_pods.go:61] "storage-provisioner" [ba58926e-fdf3-4750-b44d-7c94a027737e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:16.922268  319511 system_pods.go:74] duration metric: took 5.404916ms to wait for pod list to return data ...
	I1123 09:58:16.922278  319511 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:58:16.925487  319511 default_sa.go:45] found service account: "default"
	I1123 09:58:16.925518  319511 default_sa.go:55] duration metric: took 3.233126ms for default service account to be created ...
	I1123 09:58:16.925530  319511 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:58:16.931146  319511 system_pods.go:86] 9 kube-system pods found
	I1123 09:58:16.931197  319511 system_pods.go:89] "coredns-5dd5756b68-gf5sx" [9a493920-3739-4eb9-8426-3590a8f2ee51] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:16.931213  319511 system_pods.go:89] "etcd-old-k8s-version-709593" [ae440f4a-2d2c-44c8-9481-9696039f9cea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:58:16.931224  319511 system_pods.go:89] "kindnet-tpvt2" [fd3daece-c28b-4efa-ae53-16c16790e5be] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:58:16.931234  319511 system_pods.go:89] "kube-apiserver-old-k8s-version-709593" [e9aebd01-2f2f-4e8e-b3b9-365be3da678e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:58:16.931247  319511 system_pods.go:89] "kube-controller-manager-old-k8s-version-709593" [35acfac2-d03f-4f28-b69f-0d34ef891c0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:58:16.931261  319511 system_pods.go:89] "kube-proxy-sgv48" [f5d963bd-a2f2-44d2-969c-d219c55aba33] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:58:16.931269  319511 system_pods.go:89] "kube-scheduler-old-k8s-version-709593" [8d265257-a737-4543-b416-8535ffae7725] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:58:16.931280  319511 system_pods.go:89] "metrics-server-57f55c9bc5-98n6p" [7086738c-57f8-491c-abfa-bfa7c99c5a03] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:58:16.931288  319511 system_pods.go:89] "storage-provisioner" [ba58926e-fdf3-4750-b44d-7c94a027737e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:16.931302  319511 system_pods.go:126] duration metric: took 5.763498ms to wait for k8s-apps to be running ...
	I1123 09:58:16.931317  319511 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:58:16.931414  319511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:58:16.947858  319511 system_svc.go:56] duration metric: took 16.533152ms WaitForService to wait for kubelet
	I1123 09:58:16.947892  319511 kubeadm.go:587] duration metric: took 4.189759298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:58:16.947917  319511 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:58:16.950929  319511 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:58:16.950953  319511 node_conditions.go:123] node cpu capacity is 8
	I1123 09:58:16.950968  319511 node_conditions.go:105] duration metric: took 3.045706ms to run NodePressure ...
	I1123 09:58:16.950978  319511 start.go:242] waiting for startup goroutines ...
	I1123 09:58:16.950985  319511 start.go:247] waiting for cluster config update ...
	I1123 09:58:16.950995  319511 start.go:256] writing updated cluster config ...
	I1123 09:58:16.951224  319511 ssh_runner.go:195] Run: rm -f paused
	I1123 09:58:16.956007  319511 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:58:16.960673  319511 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gf5sx" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:58:18.967689  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	I1123 09:58:19.826235  322139 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:58:19.831841  322139 fix.go:56] duration metric: took 4.765999719s for fixHost
	I1123 09:58:19.831872  322139 start.go:83] releasing machines lock for "embed-certs-412583", held for 4.766074158s
	I1123 09:58:19.831944  322139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-412583
	I1123 09:58:19.853394  322139 ssh_runner.go:195] Run: cat /version.json
	I1123 09:58:19.853416  322139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:58:19.853450  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:19.853513  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:19.876679  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:19.876891  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:20.038564  322139 ssh_runner.go:195] Run: systemctl --version
	I1123 09:58:20.047319  322139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:58:20.052518  322139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:58:20.052594  322139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:58:20.061644  322139 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:58:20.061671  322139 start.go:496] detecting cgroup driver to use...
	I1123 09:58:20.061718  322139 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:58:20.061779  322139 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 09:58:20.081435  322139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 09:58:20.097344  322139 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:58:20.097421  322139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:58:20.113725  322139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:58:20.128400  322139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:58:20.231648  322139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:58:20.316032  322139 docker.go:234] disabling docker service ...
	I1123 09:58:20.316100  322139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:58:20.331383  322139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:58:20.347697  322139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:58:20.472315  322139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:58:20.579927  322139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:58:20.596227  322139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:58:20.625764  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 09:58:20.637317  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 09:58:20.647853  322139 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 09:58:20.647915  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 09:58:20.658746  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:58:20.669170  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 09:58:20.679447  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:58:20.689943  322139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:58:20.699586  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 09:58:20.714077  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 09:58:20.725403  322139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 09:58:20.736101  322139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:58:20.744980  322139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:58:20.755319  322139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:58:20.869025  322139 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 09:58:21.025087  322139 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 09:58:21.025164  322139 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 09:58:21.030199  322139 start.go:564] Will wait 60s for crictl version
	I1123 09:58:21.030278  322139 ssh_runner.go:195] Run: which crictl
	I1123 09:58:21.035718  322139 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:58:21.067308  322139 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 09:58:21.067444  322139 ssh_runner.go:195] Run: containerd --version
	I1123 09:58:21.090291  322139 ssh_runner.go:195] Run: containerd --version
	I1123 09:58:21.116354  322139 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
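The sed one-liners above flip containerd onto the systemd cgroup driver by rewriting the SystemdCgroup key in /etc/containerd/config.toml. The same substitution expressed as a small Go text edit, a sketch of the transformation only, not of minikube's code:

	package main

	import (
		"fmt"
		"regexp"
	)

	// setSystemdCgroup rewrites any "SystemdCgroup = ..." line to "SystemdCgroup = true",
	// preserving its indentation, like the sed command in the log.
	func setSystemdCgroup(configTOML string) string {
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = true")
	}

	func main() {
		in := "    SystemdCgroup = false\n"
		fmt.Print(setSystemdCgroup(in)) // prints: "    SystemdCgroup = true"
	}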
	W1123 09:58:17.530438  311138 node_ready.go:57] node "default-k8s-diff-port-696492" has "Ready":"False" status (will retry)
	W1123 09:58:19.531594  311138 node_ready.go:57] node "default-k8s-diff-port-696492" has "Ready":"False" status (will retry)
	I1123 09:58:20.531620  311138 node_ready.go:49] node "default-k8s-diff-port-696492" is "Ready"
	I1123 09:58:20.531690  311138 node_ready.go:38] duration metric: took 11.50429796s for node "default-k8s-diff-port-696492" to be "Ready" ...
	I1123 09:58:20.531711  311138 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:58:20.531779  311138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:58:20.548914  311138 api_server.go:72] duration metric: took 11.829659475s to wait for apiserver process to appear ...
	I1123 09:58:20.548948  311138 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:58:20.548973  311138 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 09:58:20.556266  311138 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 09:58:20.557639  311138 api_server.go:141] control plane version: v1.34.1
	I1123 09:58:20.557673  311138 api_server.go:131] duration metric: took 8.71495ms to wait for apiserver health ...
	I1123 09:58:20.557685  311138 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:58:20.563372  311138 system_pods.go:59] 8 kube-system pods found
	I1123 09:58:20.563584  311138 system_pods.go:61] "coredns-66bc5c9577-49wlg" [967d1f43-a5b7-4bf8-8111-c014f4b7594f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:20.563630  311138 system_pods.go:61] "etcd-default-k8s-diff-port-696492" [99ce30c3-ea20-422d-a7d8-4b8f58a70c07] Running
	I1123 09:58:20.563642  311138 system_pods.go:61] "kindnet-kx2hw" [1c3d2821-8e77-421a-8ccc-8d3d76d1380d] Running
	I1123 09:58:20.563658  311138 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-696492" [98117bb1-3ea0-4402-8845-6ee90c435d23] Running
	I1123 09:58:20.563666  311138 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-696492" [abb3ab85-565b-4911-8dbc-09ea147eb30b] Running
	I1123 09:58:20.563673  311138 system_pods.go:61] "kube-proxy-q6wsc" [ad2f26f5-ff1d-4acf-bea5-8ad34dc37130] Running
	I1123 09:58:20.563680  311138 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-696492" [b21530e3-7cc1-445f-82cd-1d11d79f9e20] Running
	I1123 09:58:20.563699  311138 system_pods.go:61] "storage-provisioner" [bbfe2e2e-e519-43f0-8575-91a152db45bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:20.563708  311138 system_pods.go:74] duration metric: took 6.015429ms to wait for pod list to return data ...
	I1123 09:58:20.563720  311138 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:58:20.570703  311138 default_sa.go:45] found service account: "default"
	I1123 09:58:20.570736  311138 default_sa.go:55] duration metric: took 7.009974ms for default service account to be created ...
	I1123 09:58:20.570746  311138 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:58:20.575207  311138 system_pods.go:86] 8 kube-system pods found
	I1123 09:58:20.575242  311138 system_pods.go:89] "coredns-66bc5c9577-49wlg" [967d1f43-a5b7-4bf8-8111-c014f4b7594f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:20.575249  311138 system_pods.go:89] "etcd-default-k8s-diff-port-696492" [99ce30c3-ea20-422d-a7d8-4b8f58a70c07] Running
	I1123 09:58:20.575255  311138 system_pods.go:89] "kindnet-kx2hw" [1c3d2821-8e77-421a-8ccc-8d3d76d1380d] Running
	I1123 09:58:20.575259  311138 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-696492" [98117bb1-3ea0-4402-8845-6ee90c435d23] Running
	I1123 09:58:20.575263  311138 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-696492" [abb3ab85-565b-4911-8dbc-09ea147eb30b] Running
	I1123 09:58:20.575266  311138 system_pods.go:89] "kube-proxy-q6wsc" [ad2f26f5-ff1d-4acf-bea5-8ad34dc37130] Running
	I1123 09:58:20.575270  311138 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-696492" [b21530e3-7cc1-445f-82cd-1d11d79f9e20] Running
	I1123 09:58:20.575274  311138 system_pods.go:89] "storage-provisioner" [bbfe2e2e-e519-43f0-8575-91a152db45bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:20.575296  311138 retry.go:31] will retry after 192.26313ms: missing components: kube-dns
	I1123 09:58:20.775706  311138 system_pods.go:86] 8 kube-system pods found
	I1123 09:58:20.775755  311138 system_pods.go:89] "coredns-66bc5c9577-49wlg" [967d1f43-a5b7-4bf8-8111-c014f4b7594f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:20.775763  311138 system_pods.go:89] "etcd-default-k8s-diff-port-696492" [99ce30c3-ea20-422d-a7d8-4b8f58a70c07] Running
	I1123 09:58:20.775771  311138 system_pods.go:89] "kindnet-kx2hw" [1c3d2821-8e77-421a-8ccc-8d3d76d1380d] Running
	I1123 09:58:20.775777  311138 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-696492" [98117bb1-3ea0-4402-8845-6ee90c435d23] Running
	I1123 09:58:20.775783  311138 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-696492" [abb3ab85-565b-4911-8dbc-09ea147eb30b] Running
	I1123 09:58:20.775789  311138 system_pods.go:89] "kube-proxy-q6wsc" [ad2f26f5-ff1d-4acf-bea5-8ad34dc37130] Running
	I1123 09:58:20.775794  311138 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-696492" [b21530e3-7cc1-445f-82cd-1d11d79f9e20] Running
	I1123 09:58:20.775801  311138 system_pods.go:89] "storage-provisioner" [bbfe2e2e-e519-43f0-8575-91a152db45bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:20.775821  311138 retry.go:31] will retry after 254.648665ms: missing components: kube-dns
	I1123 09:58:21.035635  311138 system_pods.go:86] 8 kube-system pods found
	I1123 09:58:21.035673  311138 system_pods.go:89] "coredns-66bc5c9577-49wlg" [967d1f43-a5b7-4bf8-8111-c014f4b7594f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:21.035679  311138 system_pods.go:89] "etcd-default-k8s-diff-port-696492" [99ce30c3-ea20-422d-a7d8-4b8f58a70c07] Running
	I1123 09:58:21.035686  311138 system_pods.go:89] "kindnet-kx2hw" [1c3d2821-8e77-421a-8ccc-8d3d76d1380d] Running
	I1123 09:58:21.035689  311138 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-696492" [98117bb1-3ea0-4402-8845-6ee90c435d23] Running
	I1123 09:58:21.035694  311138 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-696492" [abb3ab85-565b-4911-8dbc-09ea147eb30b] Running
	I1123 09:58:21.035697  311138 system_pods.go:89] "kube-proxy-q6wsc" [ad2f26f5-ff1d-4acf-bea5-8ad34dc37130] Running
	I1123 09:58:21.035703  311138 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-696492" [b21530e3-7cc1-445f-82cd-1d11d79f9e20] Running
	I1123 09:58:21.035708  311138 system_pods.go:89] "storage-provisioner" [bbfe2e2e-e519-43f0-8575-91a152db45bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:21.035722  311138 retry.go:31] will retry after 331.46599ms: missing components: kube-dns
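The "missing components: kube-dns" loop above is a poll with a growing delay: list the kube-system pods, and if a required component is still Pending, sleep and try again until a deadline passes. A generic sketch of that pattern with the actual pod check left abstract (the delays and backoff factor here are illustrative, not minikube's exact jittered policy):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// pollUntil retries check with a growing delay until it succeeds or timeout elapses,
	// mirroring the "will retry after ..." lines in the log.
	func pollUntil(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay = delay * 3 / 2 // rough backoff for illustration
		}
	}

	func main() {
		attempts := 0
		_ = pollUntil(5*time.Second, func() error {
			attempts++
			if attempts < 3 {
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
	}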
	I1123 09:58:20.222065  322309 ssh_runner.go:195] Run: systemctl --version
	I1123 09:58:20.228990  322309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:58:20.234556  322309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:58:20.234627  322309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:58:20.243545  322309 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:58:20.243573  322309 start.go:496] detecting cgroup driver to use...
	I1123 09:58:20.243611  322309 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:58:20.243660  322309 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 09:58:20.264548  322309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 09:58:20.280079  322309 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:58:20.280150  322309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:58:20.296816  322309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:58:20.310745  322309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:58:20.413004  322309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:58:20.527067  322309 docker.go:234] disabling docker service ...
	I1123 09:58:20.527157  322309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:58:20.546148  322309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:58:20.566688  322309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:58:20.669747  322309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:58:20.762925  322309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:58:20.781366  322309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:58:20.809420  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 09:58:20.821115  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 09:58:20.833147  322309 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 09:58:20.833216  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 09:58:20.844744  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:58:20.855660  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 09:58:20.869004  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 09:58:20.881745  322309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:58:20.893684  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 09:58:20.908080  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 09:58:20.921743  322309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
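The sed commands above rewrite /etc/containerd/config.toml in place: the cgroup driver is forced to systemd, the sandbox image is pinned to pause:3.10.1, and unprivileged ports are enabled before containerd is restarted. A minimal Go sketch of the same regex-rewrite idea, assuming a hypothetical rewriteContainerdConfig helper (the real step shells out to sed as shown):

// Illustrative only: apply two of the in-place regex edits from the log
// to a containerd config.toml, preserving each line's indentation.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func rewriteContainerdConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	out = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAll(out, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	fmt.Println(rewriteContainerdConfig("/etc/containerd/config.toml"))
}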
	I1123 09:58:20.934863  322309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:58:20.946434  322309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:58:20.957671  322309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:58:21.068818  322309 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 09:58:21.183026  322309 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 09:58:21.183124  322309 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
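The "Will wait 60s for socket path" step above amounts to polling until the containerd socket exists before talking to the runtime. A small sketch of that check in Go (waitForSocket is a hypothetical helper name):

// Illustrative only: poll until path exists and is a unix socket, or the
// timeout expires.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // socket is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
}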
	I1123 09:58:21.188002  322309 start.go:564] Will wait 60s for crictl version
	I1123 09:58:21.188177  322309 ssh_runner.go:195] Run: which crictl
	I1123 09:58:21.192973  322309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:58:21.221014  322309 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 09:58:21.221084  322309 ssh_runner.go:195] Run: containerd --version
	I1123 09:58:21.246163  322309 ssh_runner.go:195] Run: containerd --version
	I1123 09:58:21.272718  322309 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 09:58:21.117867  322139 cli_runner.go:164] Run: docker network inspect embed-certs-412583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:58:21.142651  322139 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 09:58:21.147763  322139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
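The /etc/hosts rewrite above is idempotent: any existing host.minikube.internal line is stripped and a fresh mapping is appended, so repeated starts do not accumulate duplicate entries. A rough Go equivalent, assuming a hypothetical ensureHostsEntry helper (the real step runs the bash one-liner shown in the log):

// Illustrative only: drop any line ending in "<tab><name>" and append the
// desired "<ip><tab><name>" mapping. Writing /etc/hosts requires root.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.103.1", "host.minikube.internal"))
}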
	I1123 09:58:21.162167  322139 kubeadm.go:884] updating cluster {Name:embed-certs-412583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:58:21.162356  322139 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:58:21.162432  322139 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:58:21.193588  322139 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:58:21.193608  322139 containerd.go:534] Images already preloaded, skipping extraction
	I1123 09:58:21.193664  322139 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:58:21.220984  322139 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:58:21.221009  322139 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:58:21.221020  322139 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1123 09:58:21.221142  322139 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-412583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:58:21.221200  322139 ssh_runner.go:195] Run: sudo crictl info
	I1123 09:58:21.253087  322139 cni.go:84] Creating CNI manager for ""
	I1123 09:58:21.253125  322139 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:58:21.253161  322139 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:58:21.253198  322139 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-412583 NodeName:embed-certs-412583 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:58:21.253456  322139 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-412583"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:58:21.253546  322139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:58:21.264720  322139 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:58:21.264808  322139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:58:21.274656  322139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1123 09:58:21.290412  322139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:58:21.306023  322139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1123 09:58:21.320803  322139 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:58:21.325391  322139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:58:21.339594  322139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:58:21.433463  322139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:58:21.462263  322139 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583 for IP: 192.168.103.2
	I1123 09:58:21.462293  322139 certs.go:195] generating shared ca certs ...
	I1123 09:58:21.462313  322139 certs.go:227] acquiring lock for ca certs: {Name:mkf0ec2efb8866dd9406da39e0a5f5dc931fd377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:21.462496  322139 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key
	I1123 09:58:21.462555  322139 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key
	I1123 09:58:21.462571  322139 certs.go:257] generating profile certs ...
	I1123 09:58:21.462693  322139 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/client.key
	I1123 09:58:21.462760  322139 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/apiserver.key.2b18ab85
	I1123 09:58:21.462855  322139 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/proxy-client.key
	I1123 09:58:21.463004  322139 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem (1338 bytes)
	W1123 09:58:21.463065  322139 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109_empty.pem, impossibly tiny 0 bytes
	I1123 09:58:21.463079  322139 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:58:21.463130  322139 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem (1082 bytes)
	I1123 09:58:21.463175  322139 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:58:21.463211  322139 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem (1679 bytes)
	I1123 09:58:21.463273  322139 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:58:21.463971  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:58:21.488159  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 09:58:21.518581  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:58:21.541410  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:58:21.573302  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 09:58:21.605581  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:58:21.635805  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:58:21.661426  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/embed-certs-412583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:58:21.683895  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem --> /usr/share/ca-certificates/7109.pem (1338 bytes)
	I1123 09:58:21.714410  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /usr/share/ca-certificates/71092.pem (1708 bytes)
	I1123 09:58:21.743544  322139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:58:21.767759  322139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:58:21.782417  322139 ssh_runner.go:195] Run: openssl version
	I1123 09:58:21.789653  322139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71092.pem && ln -fs /usr/share/ca-certificates/71092.pem /etc/ssl/certs/71092.pem"
	I1123 09:58:21.800679  322139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71092.pem
	I1123 09:58:21.805240  322139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:26 /usr/share/ca-certificates/71092.pem
	I1123 09:58:21.805305  322139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71092.pem
	I1123 09:58:21.853974  322139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71092.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:58:21.864801  322139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:58:21.874638  322139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:58:21.879072  322139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:58:21.879154  322139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:58:21.918122  322139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:58:21.927321  322139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7109.pem && ln -fs /usr/share/ca-certificates/7109.pem /etc/ssl/certs/7109.pem"
	I1123 09:58:21.937415  322139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7109.pem
	I1123 09:58:21.941575  322139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:26 /usr/share/ca-certificates/7109.pem
	I1123 09:58:21.941637  322139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7109.pem
	I1123 09:58:21.981304  322139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7109.pem /etc/ssl/certs/51391683.0"
	I1123 09:58:21.989854  322139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:58:21.994318  322139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:58:22.046395  322139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:58:22.109666  322139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:58:22.189653  322139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:58:22.263782  322139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:58:22.341044  322139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
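Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether a certificate will still be valid 24 hours from now. The same check can be done with crypto/x509; this is an illustrative sketch only (checkEnd is a hypothetical helper, and the path is taken from the log):

// Illustrative only: parse a PEM certificate and report whether it stays
// valid for the whole window (here 24h), mirroring -checkend 86400.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func checkEnd(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// true means the certificate does not expire within the window
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkEnd("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}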
	I1123 09:58:22.411923  322139 kubeadm.go:401] StartCluster: {Name:embed-certs-412583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-412583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:58:22.412038  322139 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 09:58:22.412095  322139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:58:22.522668  322139 cri.go:89] found id: "04d113f6abe1bb9e310df54f359895f1d3038255f25e995d19aed64e023780a1"
	I1123 09:58:22.522702  322139 cri.go:89] found id: "307ceccb50accd0c4f0a38e216451925bfe88e3967ed982a5dacde30cf71b0ac"
	I1123 09:58:22.522708  322139 cri.go:89] found id: "7cd8d581cd947ec50b444692aacf791c262929511eac08cf07556546cd21eb79"
	I1123 09:58:22.522712  322139 cri.go:89] found id: "02c6f5b667ffd7c657eae93a8d5e7cb1fd1b809d7d00cf6adcd749d6f9f82f54"
	I1123 09:58:22.522716  322139 cri.go:89] found id: "db362a96711e632c28850e0db72bab38f1e01f39f309dbb4359fa29d0545b2a4"
	I1123 09:58:22.522721  322139 cri.go:89] found id: "01f6da8fb3f7dfb36a0d1bf7ac34fa2c7715a85d4db29e51e680371cf976de98"
	I1123 09:58:22.522724  322139 cri.go:89] found id: "de43573b10ccd2db93907531b927156400b38e1ccc072df4694f86271eadb2a7"
	I1123 09:58:22.522728  322139 cri.go:89] found id: "c59b716fcc34de4cd73575b55a3765828129eb26a8da3f4e32971f259a35d5b9"
	I1123 09:58:22.522732  322139 cri.go:89] found id: "ea002215dc5ff9de708bfb501c13731db3b837342413eaa850d2bdaa9db3326b"
	I1123 09:58:22.522741  322139 cri.go:89] found id: "786d0436a85fd77d6e60804d917a286d3d71195fdb79aff7ac861499ed514dbf"
	I1123 09:58:22.522746  322139 cri.go:89] found id: "72aa47eb89fbb59da47429e762a23f4e68077fe27b50deb7d4860da7370e5f9b"
	I1123 09:58:22.522750  322139 cri.go:89] found id: "0275433c40df693012ccd198e9424273105899b21f0e3e75bc2219ef022bdec2"
	I1123 09:58:22.522754  322139 cri.go:89] found id: ""
	I1123 09:58:22.522808  322139 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 09:58:22.575489  322139 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"02c6f5b667ffd7c657eae93a8d5e7cb1fd1b809d7d00cf6adcd749d6f9f82f54","pid":915,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02c6f5b667ffd7c657eae93a8d5e7cb1fd1b809d7d00cf6adcd749d6f9f82f54","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02c6f5b667ffd7c657eae93a8d5e7cb1fd1b809d7d00cf6adcd749d6f9f82f54/rootfs","created":"2025-11-23T09:58:22.307218143Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"87353c65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c","io.kubernetes.cri.sandbox-name":"etcd-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6342b63ea1fe8850287e5288573654a5"},"owner":"root"},{"ociVersion":"1.2.1","id":"04d113f6abe1bb9e310df54f359895f1d3038255f25e995
d19aed64e023780a1","pid":988,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/04d113f6abe1bb9e310df54f359895f1d3038255f25e995d19aed64e023780a1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/04d113f6abe1bb9e310df54f359895f1d3038255f25e995d19aed64e023780a1/rootfs","created":"2025-11-23T09:58:22.471116692Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7","io.kubernetes.cri.sandbox-name":"kube-scheduler-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"48b1774c2a81341d1b596102d3c6374b"},"owner":"root"},{"ociVersion":"1.2.1","id":"307ceccb50accd0c4f0a38e216451925bfe88e3967ed982a5dacde30cf71b0ac","pid":965,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/
k8s.io/307ceccb50accd0c4f0a38e216451925bfe88e3967ed982a5dacde30cf71b0ac","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/307ceccb50accd0c4f0a38e216451925bfe88e3967ed982a5dacde30cf71b0ac/rootfs","created":"2025-11-23T09:58:22.428732839Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57","io.kubernetes.cri.sandbox-name":"kube-apiserver-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4910cba9d7ad0b0fc7314f9642a97b8c"},"owner":"root"},{"ociVersion":"1.2.1","id":"4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038","pid":859,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038","rootfs":"/run/containerd/io.co
ntainerd.runtime.v2.task/k8s.io/4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038/rootfs","created":"2025-11-23T09:58:22.146071319Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-412583_a16dd0b1b9cc0f64fa36d85cacd3aa9f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a16dd0b1b9cc0f64fa36d85cacd3aa9f"},"owner":"root"},{"ociVersion":"1.2.1","id":"48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57","pid":850,"status
":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57/rootfs","created":"2025-11-23T09:58:22.130629369Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-412583_4910cba9d7ad0b0fc7314f9642a97b8c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4910cba9
d7ad0b0fc7314f9642a97b8c"},"owner":"root"},{"ociVersion":"1.2.1","id":"7cd8d581cd947ec50b444692aacf791c262929511eac08cf07556546cd21eb79","pid":936,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cd8d581cd947ec50b444692aacf791c262929511eac08cf07556546cd21eb79","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cd8d581cd947ec50b444692aacf791c262929511eac08cf07556546cd21eb79/rootfs","created":"2025-11-23T09:58:22.383028386Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038","io.kubernetes.cri.sandbox-name":"kube-controller-manager-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a16dd0b1b9cc0f64fa36d85cacd3aa9f"},"owner":"root"},{"ociVersion":"1.2.1","id":"87353c
65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c","pid":767,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/87353c65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/87353c65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c/rootfs","created":"2025-11-23T09:58:22.094263435Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"87353c65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-412583_6342b63ea1fe8850287e5288573654a5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-embed-certs-412583","io.kubernetes.cri.sandbox-namespace
":"kube-system","io.kubernetes.cri.sandbox-uid":"6342b63ea1fe8850287e5288573654a5"},"owner":"root"},{"ociVersion":"1.2.1","id":"d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7","pid":867,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7/rootfs","created":"2025-11-23T09:58:22.152374682Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-412583_48b1774c2a81341d1b596102
d3c6374b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-embed-certs-412583","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"48b1774c2a81341d1b596102d3c6374b"},"owner":"root"}]
	I1123 09:58:22.575707  322139 cri.go:126] list returned 8 containers
	I1123 09:58:22.575725  322139 cri.go:129] container: {ID:02c6f5b667ffd7c657eae93a8d5e7cb1fd1b809d7d00cf6adcd749d6f9f82f54 Status:running}
	I1123 09:58:22.575759  322139 cri.go:135] skipping {02c6f5b667ffd7c657eae93a8d5e7cb1fd1b809d7d00cf6adcd749d6f9f82f54 running}: state = "running", want "paused"
	I1123 09:58:22.575771  322139 cri.go:129] container: {ID:04d113f6abe1bb9e310df54f359895f1d3038255f25e995d19aed64e023780a1 Status:created}
	I1123 09:58:22.575779  322139 cri.go:135] skipping {04d113f6abe1bb9e310df54f359895f1d3038255f25e995d19aed64e023780a1 created}: state = "created", want "paused"
	I1123 09:58:22.575791  322139 cri.go:129] container: {ID:307ceccb50accd0c4f0a38e216451925bfe88e3967ed982a5dacde30cf71b0ac Status:running}
	I1123 09:58:22.575797  322139 cri.go:135] skipping {307ceccb50accd0c4f0a38e216451925bfe88e3967ed982a5dacde30cf71b0ac running}: state = "running", want "paused"
	I1123 09:58:22.575803  322139 cri.go:129] container: {ID:4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038 Status:running}
	I1123 09:58:22.575819  322139 cri.go:131] skipping 4491ea1f47d494bc8400ace2d3ddc41f09d3b80458651cc24216703a8c48a038 - not in ps
	I1123 09:58:22.575824  322139 cri.go:129] container: {ID:48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57 Status:running}
	I1123 09:58:22.575842  322139 cri.go:131] skipping 48a7453a4ec1d48ea28ab7a2a0089797c362f3d8f56b6acc0ed1c854629aab57 - not in ps
	I1123 09:58:22.575847  322139 cri.go:129] container: {ID:7cd8d581cd947ec50b444692aacf791c262929511eac08cf07556546cd21eb79 Status:running}
	I1123 09:58:22.575860  322139 cri.go:135] skipping {7cd8d581cd947ec50b444692aacf791c262929511eac08cf07556546cd21eb79 running}: state = "running", want "paused"
	I1123 09:58:22.575876  322139 cri.go:129] container: {ID:87353c65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c Status:running}
	I1123 09:58:22.575889  322139 cri.go:131] skipping 87353c65472de68af651f086a916a190e622b9f49c4c692db9518b84ac842d7c - not in ps
	I1123 09:58:22.575895  322139 cri.go:129] container: {ID:d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7 Status:running}
	I1123 09:58:22.575902  322139 cri.go:131] skipping d36e42b5560e8cf07dd572abbe31159305f11dd144290adcd8683689748434b7 - not in ps
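The cri.go lines above reconcile the runc task list against the crictl ps output: sandboxes that do not appear in ps are skipped, and the remaining containers are kept only when their state matches the wanted "paused" state (none do here, since the cluster is running). A small illustrative filter in the same spirit; the container struct and selectPaused name are assumptions, not minikube's types:

// Illustrative only: keep container IDs that are both known to crictl ps
// and currently paused, logging why everything else is skipped.
package main

import "fmt"

type container struct {
	ID     string
	Status string
}

func selectPaused(all []container, inPs map[string]bool) []string {
	var keep []string
	for _, c := range all {
		if !inPs[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != "paused" {
			fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, "paused")
			continue
		}
		keep = append(keep, c.ID)
	}
	return keep
}

func main() {
	all := []container{{"02c6f5b6", "running"}, {"04d113f6", "created"}, {"4491ea1f", "running"}}
	inPs := map[string]bool{"02c6f5b6": true, "04d113f6": true}
	fmt.Println(selectPaused(all, inPs))
}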
	I1123 09:58:22.575953  322139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:58:22.611879  322139 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:58:22.611902  322139 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:58:22.611954  322139 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:58:22.642294  322139 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:58:22.643276  322139 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-412583" does not appear in /home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:58:22.644366  322139 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-3552/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-412583" cluster setting kubeconfig missing "embed-certs-412583" context setting]
	I1123 09:58:22.645824  322139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/kubeconfig: {Name:mka3871857a2712d9b8d0b57e593926fb298dec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:22.649056  322139 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:58:22.674727  322139 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1123 09:58:22.674893  322139 kubeadm.go:602] duration metric: took 62.98229ms to restartPrimaryControlPlane
	I1123 09:58:22.674941  322139 kubeadm.go:403] duration metric: took 263.030265ms to StartCluster
	I1123 09:58:22.675020  322139 settings.go:142] acquiring lock: {Name:mkf22dae3e46f0832bb83531ab4e1d4bfda0dd75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:22.675204  322139 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:58:22.677946  322139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/kubeconfig: {Name:mka3871857a2712d9b8d0b57e593926fb298dec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:22.678263  322139 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:58:22.678628  322139 config.go:182] Loaded profile config "embed-certs-412583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:58:22.678642  322139 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:58:22.679078  322139 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-412583"
	I1123 09:58:22.679096  322139 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-412583"
	I1123 09:58:22.679094  322139 addons.go:70] Setting default-storageclass=true in profile "embed-certs-412583"
	I1123 09:58:22.679109  322139 addons.go:70] Setting metrics-server=true in profile "embed-certs-412583"
	I1123 09:58:22.679121  322139 addons.go:239] Setting addon metrics-server=true in "embed-certs-412583"
	W1123 09:58:22.679126  322139 addons.go:248] addon metrics-server should already be in state true
	I1123 09:58:22.679129  322139 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-412583"
	I1123 09:58:22.679161  322139 host.go:66] Checking if "embed-certs-412583" exists ...
	I1123 09:58:22.679575  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:22.679663  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:22.679827  322139 addons.go:70] Setting dashboard=true in profile "embed-certs-412583"
	I1123 09:58:22.679848  322139 addons.go:239] Setting addon dashboard=true in "embed-certs-412583"
	W1123 09:58:22.679856  322139 addons.go:248] addon dashboard should already be in state true
	I1123 09:58:22.679887  322139 host.go:66] Checking if "embed-certs-412583" exists ...
	W1123 09:58:22.679104  322139 addons.go:248] addon storage-provisioner should already be in state true
	I1123 09:58:22.679985  322139 host.go:66] Checking if "embed-certs-412583" exists ...
	I1123 09:58:22.680425  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:22.680471  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:22.685805  322139 out.go:179] * Verifying Kubernetes components...
	I1123 09:58:22.689947  322139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:58:22.725041  322139 addons.go:239] Setting addon default-storageclass=true in "embed-certs-412583"
	W1123 09:58:22.725078  322139 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:58:22.725107  322139 host.go:66] Checking if "embed-certs-412583" exists ...
	I1123 09:58:22.725611  322139 cli_runner.go:164] Run: docker container inspect embed-certs-412583 --format={{.State.Status}}
	I1123 09:58:22.740887  322139 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 09:58:22.740940  322139 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:58:22.740998  322139 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 09:58:22.742416  322139 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:58:22.742442  322139 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:58:22.742508  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:22.744008  322139 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 09:58:22.744037  322139 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 09:58:22.744119  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:22.744851  322139 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
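The `docker container inspect -f` calls a few lines above read the host port mapped to the container's 22/tcp so the addon manifests can be copied in over SSH. A sketch of pulling the same field out of the inspect JSON instead of a Go template (the struct covers only the fields used; sshHostPort is a hypothetical helper):

// Illustrative only: run `docker container inspect` and decode the
// NetworkSettings.Ports["22/tcp"] host binding.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var info []inspect
	if err := json.Unmarshal(out, &info); err != nil {
		return "", err
	}
	if len(info) == 0 {
		return "", fmt.Errorf("container %s not found", container)
	}
	bindings := info[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no host binding for 22/tcp")
	}
	return bindings[0].HostPort, nil
}

func main() {
	fmt.Println(sshHostPort("embed-certs-412583"))
}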
	I1123 09:58:21.274495  322309 cli_runner.go:164] Run: docker network inspect no-preload-309734 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:58:21.294049  322309 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 09:58:21.298367  322309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:58:21.310746  322309 kubeadm.go:884] updating cluster {Name:no-preload-309734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-309734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:58:21.310857  322309 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:58:21.310899  322309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:58:21.341106  322309 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 09:58:21.341133  322309 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:58:21.341149  322309 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1123 09:58:21.341280  322309 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-309734 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-309734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:58:21.341360  322309 ssh_runner.go:195] Run: sudo crictl info
	I1123 09:58:21.375033  322309 cni.go:84] Creating CNI manager for ""
	I1123 09:58:21.375065  322309 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:58:21.375080  322309 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:58:21.375106  322309 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-309734 NodeName:no-preload-309734 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:58:21.375251  322309 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-309734"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:58:21.375322  322309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:58:21.387913  322309 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:58:21.387991  322309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:58:21.397681  322309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 09:58:21.413039  322309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:58:21.427808  322309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1123 09:58:21.443045  322309 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:58:21.447908  322309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:58:21.460621  322309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:58:21.571138  322309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:58:21.600090  322309 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734 for IP: 192.168.94.2
	I1123 09:58:21.600121  322309 certs.go:195] generating shared ca certs ...
	I1123 09:58:21.600144  322309 certs.go:227] acquiring lock for ca certs: {Name:mkf0ec2efb8866dd9406da39e0a5f5dc931fd377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:21.600287  322309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key
	I1123 09:58:21.600394  322309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key
	I1123 09:58:21.600411  322309 certs.go:257] generating profile certs ...
	I1123 09:58:21.600533  322309 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/client.key
	I1123 09:58:21.600609  322309 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/apiserver.key.e5f9e7ec
	I1123 09:58:21.600680  322309 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/proxy-client.key
	I1123 09:58:21.600837  322309 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem (1338 bytes)
	W1123 09:58:21.600886  322309 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109_empty.pem, impossibly tiny 0 bytes
	I1123 09:58:21.600905  322309 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:58:21.600944  322309 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/ca.pem (1082 bytes)
	I1123 09:58:21.600985  322309 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:58:21.601024  322309 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/certs/key.pem (1679 bytes)
	I1123 09:58:21.601090  322309 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem (1708 bytes)
	I1123 09:58:21.602145  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:58:21.634430  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 09:58:21.659812  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:58:21.682412  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:58:21.714212  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 09:58:21.741669  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:58:21.765516  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:58:21.786309  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/no-preload-309734/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:58:21.809102  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/ssl/certs/71092.pem --> /usr/share/ca-certificates/71092.pem (1708 bytes)
	I1123 09:58:21.833628  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:58:21.856900  322309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3552/.minikube/certs/7109.pem --> /usr/share/ca-certificates/7109.pem (1338 bytes)
	I1123 09:58:21.878576  322309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:58:21.893401  322309 ssh_runner.go:195] Run: openssl version
	I1123 09:58:21.900728  322309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71092.pem && ln -fs /usr/share/ca-certificates/71092.pem /etc/ssl/certs/71092.pem"
	I1123 09:58:21.910813  322309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71092.pem
	I1123 09:58:21.915460  322309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:26 /usr/share/ca-certificates/71092.pem
	I1123 09:58:21.915523  322309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71092.pem
	I1123 09:58:21.954651  322309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71092.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:58:21.964837  322309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:58:21.975213  322309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:58:21.979550  322309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:58:21.979624  322309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:58:22.031316  322309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:58:22.042472  322309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7109.pem && ln -fs /usr/share/ca-certificates/7109.pem /etc/ssl/certs/7109.pem"
	I1123 09:58:22.053567  322309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7109.pem
	I1123 09:58:22.060156  322309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:26 /usr/share/ca-certificates/7109.pem
	I1123 09:58:22.060228  322309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7109.pem
	I1123 09:58:22.123945  322309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7109.pem /etc/ssl/certs/51391683.0"
	I1123 09:58:22.141229  322309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:58:22.149212  322309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:58:22.215773  322309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:58:22.307006  322309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:58:22.409864  322309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:58:22.488008  322309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:58:22.547183  322309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
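	[editor note] The repeated `openssl x509 -noout -checkend 86400` runs above verify that each control-plane certificate stays valid for at least the next 24 hours (86400 seconds). A minimal sketch of the same check in Go, using the standard crypto/x509 package instead of shelling out; the file path is illustrative, not a claim about minikube's implementation:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Illustrative path; the log above checks /var/lib/minikube/certs/*.crt on the node.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent to `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}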
	I1123 09:58:22.632317  322309 kubeadm.go:401] StartCluster: {Name:no-preload-309734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-309734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:58:22.632463  322309 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 09:58:22.632537  322309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:58:22.820225  322309 cri.go:89] found id: "7e8fac570a0a67f195a769b2ec23f3559a12a613d3c0b7bd53111013ccc132e0"
	I1123 09:58:22.820247  322309 cri.go:89] found id: "b663a2618d3c7a61b94fdf390c3d26e81c8e3081c251ea500e08d58195f9c484"
	I1123 09:58:22.820254  322309 cri.go:89] found id: "528da9e711eda81fc2db244d270b7ad73d0db39317a08ee44e62a98b7a422e75"
	I1123 09:58:22.820259  322309 cri.go:89] found id: "aff8a96e9f47795ac47742b5100c91b5d677be8da1e8b29a8e93651c946e7426"
	I1123 09:58:22.820274  322309 cri.go:89] found id: "6d27e56eea5cbce298214845449af2e14588bbe77713319ed62e958be99d3ae7"
	I1123 09:58:22.820279  322309 cri.go:89] found id: "103095b7989eeb9782636e7c2857b6f8b7ec6b0d8f19a4d16401f43390b5b6c8"
	I1123 09:58:22.820283  322309 cri.go:89] found id: "5c49f9103fd4c18deec14e3758e958db34380a181d3ea11344ed107acc94faab"
	I1123 09:58:22.820287  322309 cri.go:89] found id: "b1f2f40f833522a80b40c076eb2228ca8ab64af5ae29ec412679554033ccf342"
	I1123 09:58:22.820291  322309 cri.go:89] found id: "d13615209a18dd7b287968a7f98989bb3ce87db942b906988e39fde11c294cce"
	I1123 09:58:22.820302  322309 cri.go:89] found id: "b7a0f8d20ac463989e63a3565c249816e2e20c9067287e9f2b8c3db6cfb05aab"
	I1123 09:58:22.820306  322309 cri.go:89] found id: "d3705422907a474de42f4b2ba1fea7490c10e3083855a79fad006ba545fab905"
	I1123 09:58:22.820311  322309 cri.go:89] found id: "a81288f6ae55b6a042b8f67e3e9eedfe1c61dd371e39e06133e14aee6f968eb3"
	I1123 09:58:22.820315  322309 cri.go:89] found id: ""
	I1123 09:58:22.820377  322309 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 09:58:22.897989  322309 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5","pid":859,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5/rootfs","created":"2025-11-23T09:58:22.314641016Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-309734_d1a9f5b1e4228d8308c268e4cff72a2a","io.kubernetes.cri.sand
box-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-309734","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d1a9f5b1e4228d8308c268e4cff72a2a"},"owner":"root"},{"ociVersion":"1.2.1","id":"528da9e711eda81fc2db244d270b7ad73d0db39317a08ee44e62a98b7a422e75","pid":956,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/528da9e711eda81fc2db244d270b7ad73d0db39317a08ee44e62a98b7a422e75","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/528da9e711eda81fc2db244d270b7ad73d0db39317a08ee44e62a98b7a422e75/rootfs","created":"2025-11-23T09:58:22.573043592Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"da35d734fa90bf64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-309734","io.kubernet
es.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"621c440e8d9733cd5781b23a5d2d5f0f"},"owner":"root"},{"ociVersion":"1.2.1","id":"6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda","pid":857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda/rootfs","created":"2025-11-23T09:58:22.304911081Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-309734_2
f0e3c5c71b122518e8f9d36a37eecf6","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-309734","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2f0e3c5c71b122518e8f9d36a37eecf6"},"owner":"root"},{"ociVersion":"1.2.1","id":"7e8fac570a0a67f195a769b2ec23f3559a12a613d3c0b7bd53111013ccc132e0","pid":973,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e8fac570a0a67f195a769b2ec23f3559a12a613d3c0b7bd53111013ccc132e0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e8fac570a0a67f195a769b2ec23f3559a12a613d3c0b7bd53111013ccc132e0/rootfs","created":"2025-11-23T09:58:22.577398869Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda","io.kubernetes.cri.sandbox-name":
"kube-scheduler-no-preload-309734","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2f0e3c5c71b122518e8f9d36a37eecf6"},"owner":"root"},{"ociVersion":"1.2.1","id":"aff8a96e9f47795ac47742b5100c91b5d677be8da1e8b29a8e93651c946e7426","pid":916,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aff8a96e9f47795ac47742b5100c91b5d677be8da1e8b29a8e93651c946e7426","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aff8a96e9f47795ac47742b5100c91b5d677be8da1e8b29a8e93651c946e7426/rootfs","created":"2025-11-23T09:58:22.496529275Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77","io.kubernetes.cri.sandbox-name":"etcd-no-preload-309734","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0a0dd6d88a52ba9
00ac99a4488161e2b"},"owner":"root"},{"ociVersion":"1.2.1","id":"b663a2618d3c7a61b94fdf390c3d26e81c8e3081c251ea500e08d58195f9c484","pid":958,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b663a2618d3c7a61b94fdf390c3d26e81c8e3081c251ea500e08d58195f9c484","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b663a2618d3c7a61b94fdf390c3d26e81c8e3081c251ea500e08d58195f9c484/rootfs","created":"2025-11-23T09:58:22.564906359Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-309734","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d1a9f5b1e4228d8308c268e4cff72a2a"},"owner":"root"},{"ociVersion":"1.2.1","id":"da35d734fa90bf
64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c","pid":845,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da35d734fa90bf64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da35d734fa90bf64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c/rootfs","created":"2025-11-23T09:58:22.300157615Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"da35d734fa90bf64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-309734_621c440e8d9733cd5781b23a5d2d5f0f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-309734","io.kubernetes.cri.sandbox
-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"621c440e8d9733cd5781b23a5d2d5f0f"},"owner":"root"},{"ociVersion":"1.2.1","id":"e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77","pid":798,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77/rootfs","created":"2025-11-23T09:58:22.263111405Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-309734_0a0dd6d88a52ba900ac99a448
8161e2b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-309734","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0a0dd6d88a52ba900ac99a4488161e2b"},"owner":"root"}]
	I1123 09:58:22.898184  322309 cri.go:126] list returned 8 containers
	I1123 09:58:22.898197  322309 cri.go:129] container: {ID:0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5 Status:running}
	I1123 09:58:22.898227  322309 cri.go:131] skipping 0e7ef217b29881586cd043cfbc7dc8a456f07f3b5136a8643217551f522c64d5 - not in ps
	I1123 09:58:22.898234  322309 cri.go:129] container: {ID:528da9e711eda81fc2db244d270b7ad73d0db39317a08ee44e62a98b7a422e75 Status:running}
	I1123 09:58:22.898244  322309 cri.go:135] skipping {528da9e711eda81fc2db244d270b7ad73d0db39317a08ee44e62a98b7a422e75 running}: state = "running", want "paused"
	I1123 09:58:22.898255  322309 cri.go:129] container: {ID:6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda Status:running}
	I1123 09:58:22.898268  322309 cri.go:131] skipping 6957f989ae00eb7cce85c7b5191eda7025c542b01b28786c02d8857138bbbfda - not in ps
	I1123 09:58:22.898273  322309 cri.go:129] container: {ID:7e8fac570a0a67f195a769b2ec23f3559a12a613d3c0b7bd53111013ccc132e0 Status:running}
	I1123 09:58:22.898280  322309 cri.go:135] skipping {7e8fac570a0a67f195a769b2ec23f3559a12a613d3c0b7bd53111013ccc132e0 running}: state = "running", want "paused"
	I1123 09:58:22.898286  322309 cri.go:129] container: {ID:aff8a96e9f47795ac47742b5100c91b5d677be8da1e8b29a8e93651c946e7426 Status:running}
	I1123 09:58:22.898292  322309 cri.go:135] skipping {aff8a96e9f47795ac47742b5100c91b5d677be8da1e8b29a8e93651c946e7426 running}: state = "running", want "paused"
	I1123 09:58:22.898299  322309 cri.go:129] container: {ID:b663a2618d3c7a61b94fdf390c3d26e81c8e3081c251ea500e08d58195f9c484 Status:running}
	I1123 09:58:22.898306  322309 cri.go:135] skipping {b663a2618d3c7a61b94fdf390c3d26e81c8e3081c251ea500e08d58195f9c484 running}: state = "running", want "paused"
	I1123 09:58:22.898312  322309 cri.go:129] container: {ID:da35d734fa90bf64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c Status:running}
	I1123 09:58:22.898320  322309 cri.go:131] skipping da35d734fa90bf64764c9df425ffdfc0f23540567dab65c90f8777c389ccbe2c - not in ps
	I1123 09:58:22.898325  322309 cri.go:129] container: {ID:e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77 Status:running}
	I1123 09:58:22.898341  322309 cri.go:131] skipping e463f20a9a42186d9b4f3b6f550188dafd9941f169b83c5d9540aa49c17ecc77 - not in ps
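	[editor note] The "skipping" decisions above show the CRI filter at work: IDs returned by `runc list` are kept only if they also appeared in the `crictl ps` output and their runc state matches the requested state ("paused" here). A hedged Go sketch of that filtering logic; the type and function names are illustrative and differ from minikube's actual cri.go code:

	package main

	import "fmt"

	// container mirrors the {ID Status} pairs printed in the log above.
	type container struct {
		ID     string
		Status string // runc state, e.g. "running" or "paused"
	}

	// filterByState keeps only IDs that crictl reported AND whose runc state matches want.
	func filterByState(runcList []container, inPS map[string]bool, want string) []string {
		var keep []string
		for _, c := range runcList {
			if !inPS[c.ID] {
				continue // mirrors "skipping <id> - not in ps" (sandboxes)
			}
			if c.Status != want {
				continue // mirrors `skipping {<id> running}: state = "running", want "paused"`
			}
			keep = append(keep, c.ID)
		}
		return keep
	}

	func main() {
		runc := []container{{ID: "sandbox-1", Status: "running"}, {ID: "app-1", Status: "running"}}
		inPS := map[string]bool{"app-1": true} // only containers crictl listed, not sandboxes
		fmt.Println(filterByState(runc, inPS, "paused")) // empty: nothing is paused, as in the log
	}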
	I1123 09:58:22.898392  322309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:58:22.910936  322309 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:58:22.910956  322309 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:58:22.911008  322309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:58:22.926982  322309 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:58:22.928309  322309 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-309734" does not appear in /home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:58:22.929354  322309 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-3552/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-309734" cluster setting kubeconfig missing "no-preload-309734" context setting]
	I1123 09:58:22.931598  322309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/kubeconfig: {Name:mka3871857a2712d9b8d0b57e593926fb298dec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:22.933983  322309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:58:22.957568  322309 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1123 09:58:22.957607  322309 kubeadm.go:602] duration metric: took 46.644326ms to restartPrimaryControlPlane
	I1123 09:58:22.957618  322309 kubeadm.go:403] duration metric: took 325.308863ms to StartCluster
	I1123 09:58:22.957641  322309 settings.go:142] acquiring lock: {Name:mkf22dae3e46f0832bb83531ab4e1d4bfda0dd75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:22.957705  322309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:58:22.960240  322309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3552/kubeconfig: {Name:mka3871857a2712d9b8d0b57e593926fb298dec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:58:22.960737  322309 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:58:22.961000  322309 config.go:182] Loaded profile config "no-preload-309734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:58:22.960842  322309 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:58:22.961080  322309 addons.go:70] Setting dashboard=true in profile "no-preload-309734"
	I1123 09:58:22.961088  322309 addons.go:70] Setting storage-provisioner=true in profile "no-preload-309734"
	I1123 09:58:22.961102  322309 addons.go:239] Setting addon dashboard=true in "no-preload-309734"
	I1123 09:58:22.961106  322309 addons.go:239] Setting addon storage-provisioner=true in "no-preload-309734"
	W1123 09:58:22.961111  322309 addons.go:248] addon dashboard should already be in state true
	W1123 09:58:22.961115  322309 addons.go:248] addon storage-provisioner should already be in state true
	I1123 09:58:22.961148  322309 host.go:66] Checking if "no-preload-309734" exists ...
	I1123 09:58:22.961153  322309 addons.go:70] Setting default-storageclass=true in profile "no-preload-309734"
	I1123 09:58:22.961188  322309 addons.go:70] Setting metrics-server=true in profile "no-preload-309734"
	I1123 09:58:22.961204  322309 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-309734"
	I1123 09:58:22.961212  322309 addons.go:239] Setting addon metrics-server=true in "no-preload-309734"
	W1123 09:58:22.961220  322309 addons.go:248] addon metrics-server should already be in state true
	I1123 09:58:22.961242  322309 host.go:66] Checking if "no-preload-309734" exists ...
	I1123 09:58:22.961148  322309 host.go:66] Checking if "no-preload-309734" exists ...
	I1123 09:58:22.961551  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:22.961668  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:22.961922  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:22.962571  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:22.963365  322309 out.go:179] * Verifying Kubernetes components...
	I1123 09:58:22.967624  322309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:58:23.000169  322309 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:58:23.002026  322309 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:58:23.002116  322309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:58:23.002210  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:23.024527  322309 addons.go:239] Setting addon default-storageclass=true in "no-preload-309734"
	W1123 09:58:23.024576  322309 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:58:23.024797  322309 host.go:66] Checking if "no-preload-309734" exists ...
	I1123 09:58:23.026816  322309 cli_runner.go:164] Run: docker container inspect no-preload-309734 --format={{.State.Status}}
	I1123 09:58:23.033752  322309 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 09:58:23.036761  322309 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 09:58:23.038842  322309 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 09:58:23.039035  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:58:23.039539  322309 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:58:23.039699  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:21.373166  311138 system_pods.go:86] 8 kube-system pods found
	I1123 09:58:21.373204  311138 system_pods.go:89] "coredns-66bc5c9577-49wlg" [967d1f43-a5b7-4bf8-8111-c014f4b7594f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:21.373212  311138 system_pods.go:89] "etcd-default-k8s-diff-port-696492" [99ce30c3-ea20-422d-a7d8-4b8f58a70c07] Running
	I1123 09:58:21.373220  311138 system_pods.go:89] "kindnet-kx2hw" [1c3d2821-8e77-421a-8ccc-8d3d76d1380d] Running
	I1123 09:58:21.373225  311138 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-696492" [98117bb1-3ea0-4402-8845-6ee90c435d23] Running
	I1123 09:58:21.373231  311138 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-696492" [abb3ab85-565b-4911-8dbc-09ea147eb30b] Running
	I1123 09:58:21.373235  311138 system_pods.go:89] "kube-proxy-q6wsc" [ad2f26f5-ff1d-4acf-bea5-8ad34dc37130] Running
	I1123 09:58:21.373241  311138 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-696492" [b21530e3-7cc1-445f-82cd-1d11d79f9e20] Running
	I1123 09:58:21.373248  311138 system_pods.go:89] "storage-provisioner" [bbfe2e2e-e519-43f0-8575-91a152db45bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:21.373267  311138 retry.go:31] will retry after 416.506633ms: missing components: kube-dns
	I1123 09:58:21.794744  311138 system_pods.go:86] 8 kube-system pods found
	I1123 09:58:21.794770  311138 system_pods.go:89] "coredns-66bc5c9577-49wlg" [967d1f43-a5b7-4bf8-8111-c014f4b7594f] Running
	I1123 09:58:21.794776  311138 system_pods.go:89] "etcd-default-k8s-diff-port-696492" [99ce30c3-ea20-422d-a7d8-4b8f58a70c07] Running
	I1123 09:58:21.794781  311138 system_pods.go:89] "kindnet-kx2hw" [1c3d2821-8e77-421a-8ccc-8d3d76d1380d] Running
	I1123 09:58:21.794787  311138 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-696492" [98117bb1-3ea0-4402-8845-6ee90c435d23] Running
	I1123 09:58:21.794793  311138 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-696492" [abb3ab85-565b-4911-8dbc-09ea147eb30b] Running
	I1123 09:58:21.794796  311138 system_pods.go:89] "kube-proxy-q6wsc" [ad2f26f5-ff1d-4acf-bea5-8ad34dc37130] Running
	I1123 09:58:21.794800  311138 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-696492" [b21530e3-7cc1-445f-82cd-1d11d79f9e20] Running
	I1123 09:58:21.794803  311138 system_pods.go:89] "storage-provisioner" [bbfe2e2e-e519-43f0-8575-91a152db45bf] Running
	I1123 09:58:21.794810  311138 system_pods.go:126] duration metric: took 1.224058938s to wait for k8s-apps to be running ...
	I1123 09:58:21.794819  311138 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:58:21.794860  311138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:58:21.810163  311138 system_svc.go:56] duration metric: took 15.335302ms WaitForService to wait for kubelet
	I1123 09:58:21.810195  311138 kubeadm.go:587] duration metric: took 13.090944371s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:58:21.810216  311138 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:58:21.813663  311138 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:58:21.813696  311138 node_conditions.go:123] node cpu capacity is 8
	I1123 09:58:21.813730  311138 node_conditions.go:105] duration metric: took 3.507443ms to run NodePressure ...
	I1123 09:58:21.813758  311138 start.go:242] waiting for startup goroutines ...
	I1123 09:58:21.813771  311138 start.go:247] waiting for cluster config update ...
	I1123 09:58:21.813790  311138 start.go:256] writing updated cluster config ...
	I1123 09:58:21.814128  311138 ssh_runner.go:195] Run: rm -f paused
	I1123 09:58:21.818537  311138 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:58:21.822899  311138 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-49wlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:21.828869  311138 pod_ready.go:94] pod "coredns-66bc5c9577-49wlg" is "Ready"
	I1123 09:58:21.828907  311138 pod_ready.go:86] duration metric: took 5.975283ms for pod "coredns-66bc5c9577-49wlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:21.831672  311138 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:21.836558  311138 pod_ready.go:94] pod "etcd-default-k8s-diff-port-696492" is "Ready"
	I1123 09:58:21.836589  311138 pod_ready.go:86] duration metric: took 4.88699ms for pod "etcd-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:21.839055  311138 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:21.843948  311138 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-696492" is "Ready"
	I1123 09:58:21.843979  311138 pod_ready.go:86] duration metric: took 4.896647ms for pod "kube-apiserver-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:21.846732  311138 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:22.223828  311138 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-696492" is "Ready"
	I1123 09:58:22.223861  311138 pod_ready.go:86] duration metric: took 377.100636ms for pod "kube-controller-manager-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:22.425032  311138 pod_ready.go:83] waiting for pod "kube-proxy-q6wsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:22.826560  311138 pod_ready.go:94] pod "kube-proxy-q6wsc" is "Ready"
	I1123 09:58:22.826589  311138 pod_ready.go:86] duration metric: took 401.523413ms for pod "kube-proxy-q6wsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:23.029997  311138 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:23.424854  311138 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-696492" is "Ready"
	I1123 09:58:23.424899  311138 pod_ready.go:86] duration metric: took 394.877866ms for pod "kube-scheduler-default-k8s-diff-port-696492" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:58:23.424916  311138 pod_ready.go:40] duration metric: took 1.606342126s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:58:23.509609  311138 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:58:23.513916  311138 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-696492" cluster and "default" namespace by default
	I1123 09:58:22.746366  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:58:22.746394  322139 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:58:22.746461  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:22.767664  322139 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:58:22.767689  322139 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:58:22.767749  322139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-412583
	I1123 09:58:22.788700  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:22.792428  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:22.792696  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:22.826177  322139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/embed-certs-412583/id_rsa Username:docker}
	I1123 09:58:23.053750  322139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:58:23.107654  322139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:58:23.123973  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:58:23.124005  322139 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:58:23.126775  322139 node_ready.go:35] waiting up to 6m0s for node "embed-certs-412583" to be "Ready" ...
	I1123 09:58:23.187851  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:58:23.187973  322139 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:58:23.217922  322139 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 09:58:23.218007  322139 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 09:58:23.305984  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:58:23.306071  322139 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:58:23.307748  322139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:58:23.324529  322139 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 09:58:23.324565  322139 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 09:58:23.340572  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:58:23.340605  322139 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:58:23.415883  322139 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:58:23.415916  322139 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 09:58:23.425591  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:58:23.425615  322139 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:58:23.474772  322139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:58:23.511681  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:58:23.511713  322139 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:58:23.599552  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:58:23.599584  322139 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:58:23.663716  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:58:23.663870  322139 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:58:23.704503  322139 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:58:23.704609  322139 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:58:23.787106  322139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:58:23.042109  322309 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 09:58:23.042193  322309 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 09:58:23.042297  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:23.073469  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:23.088819  322309 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:58:23.088845  322309 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:58:23.089358  322309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-309734
	I1123 09:58:23.101695  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:23.104450  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:23.134905  322309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/no-preload-309734/id_rsa Username:docker}
	I1123 09:58:23.362213  322309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:58:23.382428  322309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:58:23.405607  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:58:23.405632  322309 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:58:23.418817  322309 node_ready.go:35] waiting up to 6m0s for node "no-preload-309734" to be "Ready" ...
	I1123 09:58:23.455580  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:58:23.455700  322309 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:58:23.477618  322309 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 09:58:23.477639  322309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 09:58:23.518678  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:58:23.518707  322309 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:58:23.547427  322309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:58:23.587428  322309 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 09:58:23.587479  322309 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 09:58:23.594823  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:58:23.594851  322309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:58:23.645814  322309 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:58:23.645840  322309 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 09:58:23.649129  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:58:23.649206  322309 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:58:23.713186  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:58:23.713284  322309 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:58:23.743261  322309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:58:23.783546  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:58:23.783826  322309 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:58:23.933891  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:58:23.933976  322309 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:58:23.963869  322309 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:58:23.963895  322309 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:58:24.009293  322309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1123 09:58:20.970982  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	W1123 09:58:22.978532  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	W1123 09:58:25.470653  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	I1123 09:58:25.403342  322309 node_ready.go:49] node "no-preload-309734" is "Ready"
	I1123 09:58:25.403380  322309 node_ready.go:38] duration metric: took 1.984510855s for node "no-preload-309734" to be "Ready" ...
	I1123 09:58:25.403397  322309 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:58:25.403459  322309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:58:26.666727  322309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.284256705s)
	I1123 09:58:26.666807  322309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.119349474s)
	I1123 09:58:26.957105  322309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.213785139s)
	I1123 09:58:26.957147  322309 addons.go:495] Verifying addon metrics-server=true in "no-preload-309734"
	I1123 09:58:26.999989  322309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.990639389s)
	I1123 09:58:27.000614  322309 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.597077609s)
	I1123 09:58:27.000657  322309 api_server.go:72] duration metric: took 4.039883277s to wait for apiserver process to appear ...
	I1123 09:58:27.000673  322309 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:58:27.000695  322309 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:58:27.004257  322309 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-309734 addons enable metrics-server
	
	I1123 09:58:27.008720  322309 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1123 09:58:25.303722  322139 node_ready.go:49] node "embed-certs-412583" is "Ready"
	I1123 09:58:25.303762  322139 node_ready.go:38] duration metric: took 2.176945691s for node "embed-certs-412583" to be "Ready" ...
	I1123 09:58:25.303846  322139 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:58:25.303947  322139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:58:27.038119  322139 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.730262779s)
	I1123 09:58:27.038194  322139 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.563347534s)
	I1123 09:58:27.038215  322139 addons.go:495] Verifying addon metrics-server=true in "embed-certs-412583"
	I1123 09:58:27.038425  322139 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.251268911s)
	I1123 09:58:27.038452  322139 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.734490994s)
	I1123 09:58:27.038474  322139 api_server.go:72] duration metric: took 4.360181863s to wait for apiserver process to appear ...
	I1123 09:58:27.038495  322139 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:58:27.038512  322139 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:58:27.038986  322139 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.931239095s)
	I1123 09:58:27.040662  322139 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-412583 addons enable metrics-server
	
	I1123 09:58:27.049441  322139 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 09:58:27.051186  322139 api_server.go:141] control plane version: v1.34.1
	I1123 09:58:27.051287  322139 api_server.go:131] duration metric: took 12.782895ms to wait for apiserver health ...
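	[editor note] The healthz wait above is an HTTPS GET against the apiserver's /healthz endpoint, repeated until it returns 200 "ok". A minimal sketch of such a probe in Go; the URL and the insecure TLS setting are illustrative assumptions (minikube verifies the apiserver certificate against the cluster CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustrative only: skip verification; a real client would trust the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.103.2:8443/healthz"
		for i := 0; i < 60; i++ {
			resp, err := client.Get(url)
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("apiserver healthz returned 200: ok")
				return
			}
			if resp != nil {
				resp.Body.Close()
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}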
	I1123 09:58:27.051322  322139 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:58:27.057840  322139 system_pods.go:59] 9 kube-system pods found
	I1123 09:58:27.057894  322139 system_pods.go:61] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:27.057908  322139 system_pods.go:61] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:58:27.057919  322139 system_pods.go:61] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:58:27.057940  322139 system_pods.go:61] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:58:27.057947  322139 system_pods.go:61] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:58:27.057951  322139 system_pods.go:61] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:58:27.057957  322139 system_pods.go:61] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:58:27.057962  322139 system_pods.go:61] "metrics-server-746fcd58dc-5bq5f" [856d4db7-3788-41a2-98d4-e61a5d997e43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:58:27.057975  322139 system_pods.go:61] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:27.057988  322139 system_pods.go:74] duration metric: took 6.449125ms to wait for pod list to return data ...
	I1123 09:58:27.058002  322139 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:58:27.061637  322139 default_sa.go:45] found service account: "default"
	I1123 09:58:27.061669  322139 default_sa.go:55] duration metric: took 3.65968ms for default service account to be created ...
	I1123 09:58:27.061681  322139 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:58:27.062869  322139 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I1123 09:58:27.064609  322139 addons.go:530] duration metric: took 4.385954428s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I1123 09:58:27.066570  322139 system_pods.go:86] 9 kube-system pods found
	I1123 09:58:27.066606  322139 system_pods.go:89] "coredns-66bc5c9577-8dgc7" [f685cc03-30df-4119-9d66-0e808c2d3c93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:27.066621  322139 system_pods.go:89] "etcd-embed-certs-412583" [ea8b65e6-8c1f-4dda-8902-6b6be242b01f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:58:27.066629  322139 system_pods.go:89] "kindnet-f76c2" [16967e76-b4bf-4a99-aab9-d7f76cbb0830] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:58:27.066643  322139 system_pods.go:89] "kube-apiserver-embed-certs-412583" [7eee3d42-8f6d-4f15-8eb6-d6cb611f8904] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:58:27.066649  322139 system_pods.go:89] "kube-controller-manager-embed-certs-412583" [e118b0d0-9dad-4c49-beb5-fa7d32814216] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:58:27.066653  322139 system_pods.go:89] "kube-proxy-wm7k2" [120a9b03-e7bf-4f4d-9b8c-6fa05d3739d9] Running
	I1123 09:58:27.066658  322139 system_pods.go:89] "kube-scheduler-embed-certs-412583" [dde2c2e0-b58a-4028-a671-1a8f577dd063] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:58:27.066662  322139 system_pods.go:89] "metrics-server-746fcd58dc-5bq5f" [856d4db7-3788-41a2-98d4-e61a5d997e43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:58:27.066667  322139 system_pods.go:89] "storage-provisioner" [dcf16920-e30b-42ab-8195-4ef946498d0f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:27.066674  322139 system_pods.go:126] duration metric: took 4.987876ms to wait for k8s-apps to be running ...
	I1123 09:58:27.066682  322139 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:58:27.066728  322139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:58:27.084505  322139 system_svc.go:56] duration metric: took 17.815139ms WaitForService to wait for kubelet
	I1123 09:58:27.084533  322139 kubeadm.go:587] duration metric: took 4.406240193s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:58:27.084548  322139 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:58:27.088257  322139 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:58:27.088292  322139 node_conditions.go:123] node cpu capacity is 8
	I1123 09:58:27.088309  322139 node_conditions.go:105] duration metric: took 3.756078ms to run NodePressure ...
	I1123 09:58:27.088325  322139 start.go:242] waiting for startup goroutines ...
	I1123 09:58:27.088345  322139 start.go:247] waiting for cluster config update ...
	I1123 09:58:27.088359  322139 start.go:256] writing updated cluster config ...
	I1123 09:58:27.088712  322139 ssh_runner.go:195] Run: rm -f paused
	I1123 09:58:27.093478  322139 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:58:27.098111  322139 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8dgc7" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:58:29.104647  322139 pod_ready.go:104] pod "coredns-66bc5c9577-8dgc7" is not "Ready", error: <nil>
	I1123 09:58:27.010224  322309 addons.go:530] duration metric: took 4.049374736s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1123 09:58:27.015771  322309 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:58:27.015817  322309 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:58:27.501093  322309 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:58:27.506121  322309 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:58:27.506153  322309 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:58:28.001522  322309 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:58:28.006128  322309 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 09:58:28.008093  322309 api_server.go:141] control plane version: v1.34.1
	I1123 09:58:28.008128  322309 api_server.go:131] duration metric: took 1.007447817s to wait for apiserver health ...
	I1123 09:58:28.008140  322309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:58:28.012632  322309 system_pods.go:59] 9 kube-system pods found
	I1123 09:58:28.012693  322309 system_pods.go:61] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:28.012707  322309 system_pods.go:61] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:58:28.012732  322309 system_pods.go:61] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:58:28.012741  322309 system_pods.go:61] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:58:28.012753  322309 system_pods.go:61] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:58:28.012760  322309 system_pods.go:61] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:58:28.012765  322309 system_pods.go:61] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:58:28.012773  322309 system_pods.go:61] "metrics-server-746fcd58dc-gtpxg" [91f7dd1b-5d54-4720-9cd3-bd846b219cd8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:58:28.012782  322309 system_pods.go:61] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:28.012789  322309 system_pods.go:74] duration metric: took 4.643282ms to wait for pod list to return data ...
	I1123 09:58:28.012799  322309 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:58:28.015740  322309 default_sa.go:45] found service account: "default"
	I1123 09:58:28.015766  322309 default_sa.go:55] duration metric: took 2.958976ms for default service account to be created ...
	I1123 09:58:28.015776  322309 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:58:28.019218  322309 system_pods.go:86] 9 kube-system pods found
	I1123 09:58:28.019258  322309 system_pods.go:89] "coredns-66bc5c9577-sx25q" [50adb46a-6c29-465a-adba-f806eeef81aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:58:28.019271  322309 system_pods.go:89] "etcd-no-preload-309734" [debda9ed-65d8-4a7e-99a0-42943a3c0520] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:58:28.019282  322309 system_pods.go:89] "kindnet-d6zbp" [d1c56dde-7af0-49ca-a982-04ae56add5f9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:58:28.019294  322309 system_pods.go:89] "kube-apiserver-no-preload-309734" [165ccf5d-2d0c-4395-b9e8-31308c188f74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:58:28.019302  322309 system_pods.go:89] "kube-controller-manager-no-preload-309734" [d70022cf-2aaa-45a7-bcb0-0563bf832d88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:58:28.019311  322309 system_pods.go:89] "kube-proxy-jpvhc" [eb0ab966-23fc-429f-bcfe-eb5726b865be] Running
	I1123 09:58:28.019322  322309 system_pods.go:89] "kube-scheduler-no-preload-309734" [c1fac6cc-06b9-419d-b9e5-e99b01de4dd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:58:28.019386  322309 system_pods.go:89] "metrics-server-746fcd58dc-gtpxg" [91f7dd1b-5d54-4720-9cd3-bd846b219cd8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:58:28.019404  322309 system_pods.go:89] "storage-provisioner" [b1352952-5fff-47aa-af05-dd6b2078fa39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:58:28.019414  322309 system_pods.go:126] duration metric: took 3.631818ms to wait for k8s-apps to be running ...
	I1123 09:58:28.019427  322309 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:58:28.019480  322309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:58:28.039272  322309 system_svc.go:56] duration metric: took 19.836608ms WaitForService to wait for kubelet
	I1123 09:58:28.039305  322309 kubeadm.go:587] duration metric: took 5.078530615s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:58:28.039348  322309 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:58:28.042824  322309 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:58:28.042860  322309 node_conditions.go:123] node cpu capacity is 8
	I1123 09:58:28.042880  322309 node_conditions.go:105] duration metric: took 3.526093ms to run NodePressure ...
	I1123 09:58:28.042895  322309 start.go:242] waiting for startup goroutines ...
	I1123 09:58:28.042906  322309 start.go:247] waiting for cluster config update ...
	I1123 09:58:28.042926  322309 start.go:256] writing updated cluster config ...
	I1123 09:58:28.043236  322309 ssh_runner.go:195] Run: rm -f paused
	I1123 09:58:28.048448  322309 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:58:28.054721  322309 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sx25q" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:58:30.061547  322309 pod_ready.go:104] pod "coredns-66bc5c9577-sx25q" is not "Ready", error: <nil>
	W1123 09:58:27.966936  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	W1123 09:58:29.967457  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	W1123 09:58:31.603962  322139 pod_ready.go:104] pod "coredns-66bc5c9577-8dgc7" is not "Ready", error: <nil>
	W1123 09:58:33.606764  322139 pod_ready.go:104] pod "coredns-66bc5c9577-8dgc7" is not "Ready", error: <nil>
	W1123 09:58:32.061673  322309 pod_ready.go:104] pod "coredns-66bc5c9577-sx25q" is not "Ready", error: <nil>
	W1123 09:58:34.063628  322309 pod_ready.go:104] pod "coredns-66bc5c9577-sx25q" is not "Ready", error: <nil>
	W1123 09:58:31.968056  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	W1123 09:58:33.973957  319511 pod_ready.go:104] pod "coredns-5dd5756b68-gf5sx" is not "Ready", error: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	8447438246f63       56cc512116c8f       10 seconds ago      Running             busybox                   0                   e97d1ab2108e1       busybox                                                default
	f45b6674fee79       52546a367cc9e       15 seconds ago      Running             coredns                   0                   478a15b3e8809       coredns-66bc5c9577-49wlg                               kube-system
	88f6eeddc1856       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   d28e7710f13fc       storage-provisioner                                    kube-system
	02522085d67a4       409467f978b4a       27 seconds ago      Running             kindnet-cni               0                   9ce22c41aa99c       kindnet-kx2hw                                          kube-system
	62dd8f139861d       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   ff78308b78ac3       kube-proxy-q6wsc                                       kube-system
	c4ba281063cb0       c80c8dbafe7dd       38 seconds ago      Running             kube-controller-manager   0                   adb1246cb4b28       kube-controller-manager-default-k8s-diff-port-696492   kube-system
	842222ab6c244       5f1f5298c888d       38 seconds ago      Running             etcd                      0                   6fda8451f90ff       etcd-default-k8s-diff-port-696492                      kube-system
	52012eaf34144       7dd6aaa1717ab       38 seconds ago      Running             kube-scheduler            0                   a2ac0fa566c5d       kube-scheduler-default-k8s-diff-port-696492            kube-system
	260483ba1a152       c3994bc696102       38 seconds ago      Running             kube-apiserver            0                   33d780512464d       kube-apiserver-default-k8s-diff-port-696492            kube-system
	
	
	==> containerd <==
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.886977891Z" level=info msg="StartContainer for \"88f6eeddc18564a70f0c3c28d32fa11b88032e467a4769be8046cf8d399a116d\""
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.888256509Z" level=info msg="connecting to shim 88f6eeddc18564a70f0c3c28d32fa11b88032e467a4769be8046cf8d399a116d" address="unix:///run/containerd/s/21dc298e388a58283d9f7e9de3c335cc8020cd3253d7f00adc02472438f35f28" protocol=ttrpc version=3
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.891241530Z" level=info msg="CreateContainer within sandbox \"478a15b3e8809d0d0cde5ecc7b3ca9f7a11f14627d862d9f3680782ea53ee42d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.900248677Z" level=info msg="Container f45b6674fee79d5f0ee76cd999de2d963f3455967a3a0f5e273a6278dd55b594: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.908587231Z" level=info msg="CreateContainer within sandbox \"478a15b3e8809d0d0cde5ecc7b3ca9f7a11f14627d862d9f3680782ea53ee42d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f45b6674fee79d5f0ee76cd999de2d963f3455967a3a0f5e273a6278dd55b594\""
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.909593531Z" level=info msg="StartContainer for \"f45b6674fee79d5f0ee76cd999de2d963f3455967a3a0f5e273a6278dd55b594\""
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.910709483Z" level=info msg="connecting to shim f45b6674fee79d5f0ee76cd999de2d963f3455967a3a0f5e273a6278dd55b594" address="unix:///run/containerd/s/fd72551db93b76b30e6e5e6c56cf734dfc4bebb23af37fa9336a8c2893ca7a72" protocol=ttrpc version=3
	Nov 23 09:58:20 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:20.941158498Z" level=info msg="StartContainer for \"88f6eeddc18564a70f0c3c28d32fa11b88032e467a4769be8046cf8d399a116d\" returns successfully"
	Nov 23 09:58:21 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:21.005600388Z" level=info msg="StartContainer for \"f45b6674fee79d5f0ee76cd999de2d963f3455967a3a0f5e273a6278dd55b594\" returns successfully"
	Nov 23 09:58:24 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:24.143871334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e7cb3e3f-9c9d-4b5c-ae5d-efdfc6bb9330,Namespace:default,Attempt:0,}"
	Nov 23 09:58:24 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:24.202138909Z" level=info msg="connecting to shim e97d1ab2108e111925782798fb153a04508c1c587e92529beff66f3b24b7ef46" address="unix:///run/containerd/s/e1ef4feccc734ee6546949826a9ecabc0b203d14a3193efa1a36e4f1523566c3" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:58:24 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:24.319382624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e7cb3e3f-9c9d-4b5c-ae5d-efdfc6bb9330,Namespace:default,Attempt:0,} returns sandbox id \"e97d1ab2108e111925782798fb153a04508c1c587e92529beff66f3b24b7ef46\""
	Nov 23 09:58:24 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:24.325375359Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.513056507Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.514040258Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396644"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.515907363Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.518717877Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.519316855Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.193868575s"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.519382474Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.527056005Z" level=info msg="CreateContainer within sandbox \"e97d1ab2108e111925782798fb153a04508c1c587e92529beff66f3b24b7ef46\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.546635344Z" level=info msg="Container 8447438246f639df37b67de53a953c7f4e832ee623d3e1591ea833c548022b03: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.558968009Z" level=info msg="CreateContainer within sandbox \"e97d1ab2108e111925782798fb153a04508c1c587e92529beff66f3b24b7ef46\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8447438246f639df37b67de53a953c7f4e832ee623d3e1591ea833c548022b03\""
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.560905545Z" level=info msg="StartContainer for \"8447438246f639df37b67de53a953c7f4e832ee623d3e1591ea833c548022b03\""
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.565647935Z" level=info msg="connecting to shim 8447438246f639df37b67de53a953c7f4e832ee623d3e1591ea833c548022b03" address="unix:///run/containerd/s/e1ef4feccc734ee6546949826a9ecabc0b203d14a3193efa1a36e4f1523566c3" protocol=ttrpc version=3
	Nov 23 09:58:26 default-k8s-diff-port-696492 containerd[664]: time="2025-11-23T09:58:26.690801107Z" level=info msg="StartContainer for \"8447438246f639df37b67de53a953c7f4e832ee623d3e1591ea833c548022b03\" returns successfully"
	
	
	==> coredns [f45b6674fee79d5f0ee76cd999de2d963f3455967a3a0f5e273a6278dd55b594] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60147 - 47991 "HINFO IN 7168823184494500575.1194822797604877992. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033141887s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-696492
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-696492
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=default-k8s-diff-port-696492
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_58_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:58:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-696492
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:58:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:58:33 +0000   Sun, 23 Nov 2025 09:57:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:58:33 +0000   Sun, 23 Nov 2025 09:57:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:58:33 +0000   Sun, 23 Nov 2025 09:57:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:58:33 +0000   Sun, 23 Nov 2025 09:58:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-696492
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c6439931-9547-4eff-a445-4b28dd7aea61
	  Boot ID:                    e4c4d39b-bebd-4037-9237-26b945dbe084
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-49wlg                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-default-k8s-diff-port-696492                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-kx2hw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-default-k8s-diff-port-696492             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-696492    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-q6wsc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-default-k8s-diff-port-696492             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node default-k8s-diff-port-696492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node default-k8s-diff-port-696492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x7 over 39s)  kubelet          Node default-k8s-diff-port-696492 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  33s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s                kubelet          Node default-k8s-diff-port-696492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet          Node default-k8s-diff-port-696492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet          Node default-k8s-diff-port-696492 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node default-k8s-diff-port-696492 event: Registered Node default-k8s-diff-port-696492 in Controller
	  Normal  NodeReady                16s                kubelet          Node default-k8s-diff-port-696492 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.288463] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 09:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[Nov23 09:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.195562] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +5.912917] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 c0 1c 98 33 a9 08 06
	[  +0.000437] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e bd c3 0c c1 99 08 06
	[ +10.002091] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 47 bd bf 96 57 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 2b 39 eb 11 2b 08 06
	[  +4.460318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 85 b9 91 f8 a4 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e 49 b3 20 41 43 08 06
	[  +2.904694] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	[Nov23 09:57] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 76 48 bf 8b d1 fc 08 06
	[  +0.000931] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e 48 a2 4c da c6 08 06
	
	
	==> etcd [842222ab6c244214fb7ee6baeb300cef7642a0363f771b03d1a504ac99132070] <==
	{"level":"warn","ts":"2025-11-23T09:57:59.618227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.638239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.648857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.662594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.668479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.678088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.686705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.693977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.702587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.712188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.721797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.732823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.741907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.751410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.760428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.769703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.778978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.788950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.796949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.806234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.816850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.828714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.837858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.848154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:57:59.929442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50668","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:58:37 up 40 min,  0 user,  load average: 6.24, 4.55, 2.83
	Linux default-k8s-diff-port-696492 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [02522085d67a410254267ee219e6627961454b738df21c14c684ae238c0fe4b6] <==
	I1123 09:58:10.059807       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:58:10.060103       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 09:58:10.060363       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:58:10.060390       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:58:10.060422       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:58:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:58:10.358743       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:58:10.358912       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:58:10.358935       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:58:10.359166       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:58:10.839305       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:58:10.839358       1 metrics.go:72] Registering metrics
	I1123 09:58:10.839452       1 controller.go:711] "Syncing nftables rules"
	I1123 09:58:20.360635       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:58:20.360682       1 main.go:301] handling current node
	I1123 09:58:30.360250       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:58:30.360310       1 main.go:301] handling current node
	
	
	==> kube-apiserver [260483ba1a1523f842d7822582fa2c0eccb179009df5831d6ae999dcb45e74d0] <==
	I1123 09:58:00.611450       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:58:00.611457       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:58:00.611464       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:58:00.614872       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:58:00.620675       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:58:00.635605       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:58:00.657631       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:58:01.517017       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:58:01.522440       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:58:01.522467       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:58:02.419506       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:58:02.477910       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:58:02.574517       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:58:02.627077       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:58:02.646696       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 09:58:02.650246       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:58:02.657854       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:58:03.654632       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:58:03.670885       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:58:03.686658       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:58:07.728773       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:58:07.736964       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:58:08.326282       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 09:58:08.525001       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1123 09:58:32.974009       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:49718: use of closed network connection
	
	
	==> kube-controller-manager [c4ba281063cb08c4a19749761d1dafbb99802bd3aa3a7a50087abdb2e15455fd] <==
	I1123 09:58:07.533842       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 09:58:07.536236       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-696492" podCIDRs=["10.244.0.0/24"]
	I1123 09:58:07.551416       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 09:58:07.563875       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:58:07.572132       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 09:58:07.572173       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:58:07.572257       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:58:07.572261       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 09:58:07.572274       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:58:07.572717       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:58:07.572831       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:58:07.572905       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:58:07.573049       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:58:07.573954       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:58:07.574044       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:58:07.574804       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:58:07.574955       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:58:07.578104       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:58:07.579306       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:58:07.579326       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:58:07.580555       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:58:07.587320       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:58:07.593522       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 09:58:07.593612       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:58:22.524214       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [62dd8f139861d152370867f5755d14af5c5c3ef214c0e4c570ca082f5a3b25d7] <==
	I1123 09:58:09.567863       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:58:09.636549       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:58:09.736714       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:58:09.736757       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 09:58:09.736888       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:58:09.768239       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:58:09.768353       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:58:09.775207       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:58:09.775865       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:58:09.775907       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:58:09.777697       1 config.go:309] "Starting node config controller"
	I1123 09:58:09.777770       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:58:09.777780       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:58:09.777998       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:58:09.778012       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:58:09.778021       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:58:09.778392       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:58:09.778918       1 config.go:200] "Starting service config controller"
	I1123 09:58:09.778940       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:58:09.879226       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:58:09.879236       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:58:09.880040       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [52012eaf341449ecd532cfe1abc80dc23366de525e1fd5c3c7cb1f9af315c852] <==
	E1123 09:58:00.585445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:58:00.585484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:58:00.585511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:58:00.585606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:58:00.585652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:58:00.585874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:58:01.455228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:58:01.480919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:58:01.503650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:58:01.542689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:58:01.662847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:58:01.752041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:58:01.785844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:58:01.824612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:58:01.829682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:58:01.844562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:58:01.857188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:58:01.861059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:58:01.870728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:58:01.877522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:58:01.890844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:58:01.891558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:58:01.937859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:58:01.994498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1123 09:58:03.479527       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412516    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45rc\" (UniqueName: \"kubernetes.io/projected/1c3d2821-8e77-421a-8ccc-8d3d76d1380d-kube-api-access-l45rc\") pod \"kindnet-kx2hw\" (UID: \"1c3d2821-8e77-421a-8ccc-8d3d76d1380d\") " pod="kube-system/kindnet-kx2hw"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412560    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad2f26f5-ff1d-4acf-bea5-8ad34dc37130-kube-proxy\") pod \"kube-proxy-q6wsc\" (UID: \"ad2f26f5-ff1d-4acf-bea5-8ad34dc37130\") " pod="kube-system/kube-proxy-q6wsc"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412576    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2c2z\" (UniqueName: \"kubernetes.io/projected/ad2f26f5-ff1d-4acf-bea5-8ad34dc37130-kube-api-access-c2c2z\") pod \"kube-proxy-q6wsc\" (UID: \"ad2f26f5-ff1d-4acf-bea5-8ad34dc37130\") " pod="kube-system/kube-proxy-q6wsc"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412597    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1c3d2821-8e77-421a-8ccc-8d3d76d1380d-cni-cfg\") pod \"kindnet-kx2hw\" (UID: \"1c3d2821-8e77-421a-8ccc-8d3d76d1380d\") " pod="kube-system/kindnet-kx2hw"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412615    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c3d2821-8e77-421a-8ccc-8d3d76d1380d-xtables-lock\") pod \"kindnet-kx2hw\" (UID: \"1c3d2821-8e77-421a-8ccc-8d3d76d1380d\") " pod="kube-system/kindnet-kx2hw"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412633    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c3d2821-8e77-421a-8ccc-8d3d76d1380d-lib-modules\") pod \"kindnet-kx2hw\" (UID: \"1c3d2821-8e77-421a-8ccc-8d3d76d1380d\") " pod="kube-system/kindnet-kx2hw"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:08.412737    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad2f26f5-ff1d-4acf-bea5-8ad34dc37130-lib-modules\") pod \"kube-proxy-q6wsc\" (UID: \"ad2f26f5-ff1d-4acf-bea5-8ad34dc37130\") " pod="kube-system/kube-proxy-q6wsc"
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:08.522177    1459 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:08.522231    1459 projected.go:196] Error preparing data for projected volume kube-api-access-c2c2z for pod kube-system/kube-proxy-q6wsc: configmap "kube-root-ca.crt" not found
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:08.522183    1459 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:08.522316    1459 projected.go:196] Error preparing data for projected volume kube-api-access-l45rc for pod kube-system/kindnet-kx2hw: configmap "kube-root-ca.crt" not found
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:08.522386    1459 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ad2f26f5-ff1d-4acf-bea5-8ad34dc37130-kube-api-access-c2c2z podName:ad2f26f5-ff1d-4acf-bea5-8ad34dc37130 nodeName:}" failed. No retries permitted until 2025-11-23 09:58:09.022312027 +0000 UTC m=+5.615544873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c2c2z" (UniqueName: "kubernetes.io/projected/ad2f26f5-ff1d-4acf-bea5-8ad34dc37130-kube-api-access-c2c2z") pod "kube-proxy-q6wsc" (UID: "ad2f26f5-ff1d-4acf-bea5-8ad34dc37130") : configmap "kube-root-ca.crt" not found
	Nov 23 09:58:08 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:08.522420    1459 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c3d2821-8e77-421a-8ccc-8d3d76d1380d-kube-api-access-l45rc podName:1c3d2821-8e77-421a-8ccc-8d3d76d1380d nodeName:}" failed. No retries permitted until 2025-11-23 09:58:09.022396574 +0000 UTC m=+5.615629419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l45rc" (UniqueName: "kubernetes.io/projected/1c3d2821-8e77-421a-8ccc-8d3d76d1380d-kube-api-access-l45rc") pod "kindnet-kx2hw" (UID: "1c3d2821-8e77-421a-8ccc-8d3d76d1380d") : configmap "kube-root-ca.crt" not found
	Nov 23 09:58:10 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:10.552412    1459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q6wsc" podStartSLOduration=2.552388347 podStartE2EDuration="2.552388347s" podCreationTimestamp="2025-11-23 09:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:58:10.552255516 +0000 UTC m=+7.145488365" watchObservedRunningTime="2025-11-23 09:58:10.552388347 +0000 UTC m=+7.145621252"
	Nov 23 09:58:10 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:10.729543    1459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kx2hw" podStartSLOduration=2.729516565 podStartE2EDuration="2.729516565s" podCreationTimestamp="2025-11-23 09:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:58:10.584486608 +0000 UTC m=+7.177719456" watchObservedRunningTime="2025-11-23 09:58:10.729516565 +0000 UTC m=+7.322749413"
	Nov 23 09:58:20 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:20.391108    1459 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:58:20 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:20.506719    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4786\" (UniqueName: \"kubernetes.io/projected/967d1f43-a5b7-4bf8-8111-c014f4b7594f-kube-api-access-r4786\") pod \"coredns-66bc5c9577-49wlg\" (UID: \"967d1f43-a5b7-4bf8-8111-c014f4b7594f\") " pod="kube-system/coredns-66bc5c9577-49wlg"
	Nov 23 09:58:20 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:20.506792    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7rd\" (UniqueName: \"kubernetes.io/projected/bbfe2e2e-e519-43f0-8575-91a152db45bf-kube-api-access-bc7rd\") pod \"storage-provisioner\" (UID: \"bbfe2e2e-e519-43f0-8575-91a152db45bf\") " pod="kube-system/storage-provisioner"
	Nov 23 09:58:20 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:20.506858    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/967d1f43-a5b7-4bf8-8111-c014f4b7594f-config-volume\") pod \"coredns-66bc5c9577-49wlg\" (UID: \"967d1f43-a5b7-4bf8-8111-c014f4b7594f\") " pod="kube-system/coredns-66bc5c9577-49wlg"
	Nov 23 09:58:20 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:20.506886    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbfe2e2e-e519-43f0-8575-91a152db45bf-tmp\") pod \"storage-provisioner\" (UID: \"bbfe2e2e-e519-43f0-8575-91a152db45bf\") " pod="kube-system/storage-provisioner"
	Nov 23 09:58:21 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:21.590940    1459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-49wlg" podStartSLOduration=13.590915197 podStartE2EDuration="13.590915197s" podCreationTimestamp="2025-11-23 09:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:58:21.590082134 +0000 UTC m=+18.183314984" watchObservedRunningTime="2025-11-23 09:58:21.590915197 +0000 UTC m=+18.184148045"
	Nov 23 09:58:21 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:21.627669    1459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.627626835000001 podStartE2EDuration="12.627626835s" podCreationTimestamp="2025-11-23 09:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:58:21.609573127 +0000 UTC m=+18.202805976" watchObservedRunningTime="2025-11-23 09:58:21.627626835 +0000 UTC m=+18.220859682"
	Nov 23 09:58:23 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:23.931886    1459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7lj4\" (UniqueName: \"kubernetes.io/projected/e7cb3e3f-9c9d-4b5c-ae5d-efdfc6bb9330-kube-api-access-j7lj4\") pod \"busybox\" (UID: \"e7cb3e3f-9c9d-4b5c-ae5d-efdfc6bb9330\") " pod="default/busybox"
	Nov 23 09:58:27 default-k8s-diff-port-696492 kubelet[1459]: I1123 09:58:27.640068    1459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.442575199 podStartE2EDuration="4.640045849s" podCreationTimestamp="2025-11-23 09:58:23 +0000 UTC" firstStartedPulling="2025-11-23 09:58:24.323602215 +0000 UTC m=+20.916835047" lastFinishedPulling="2025-11-23 09:58:26.521072867 +0000 UTC m=+23.114305697" observedRunningTime="2025-11-23 09:58:27.63967862 +0000 UTC m=+24.232911489" watchObservedRunningTime="2025-11-23 09:58:27.640045849 +0000 UTC m=+24.233278696"
	Nov 23 09:58:32 default-k8s-diff-port-696492 kubelet[1459]: E1123 09:58:32.973717    1459 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.85.2:41480->192.168.85.2:10010: write tcp 192.168.85.2:41480->192.168.85.2:10010: write: broken pipe
	
	
	==> storage-provisioner [88f6eeddc18564a70f0c3c28d32fa11b88032e467a4769be8046cf8d399a116d] <==
	I1123 09:58:20.967723       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:58:20.973297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:20.984499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:58:20.984770       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:58:20.985206       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b60e9482-d678-4958-8cff-3ab7d57cc846", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-696492_a77c9ea0-60d8-4e87-a0f2-4b293fa6d6a5 became leader
	I1123 09:58:20.985655       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-696492_a77c9ea0-60d8-4e87-a0f2-4b293fa6d6a5!
	W1123 09:58:20.992574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:21.007436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:58:21.086621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-696492_a77c9ea0-60d8-4e87-a0f2-4b293fa6d6a5!
	W1123 09:58:23.012785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:23.019724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:25.024580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:25.030065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:27.034275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:27.042707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:29.047910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:29.052847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:31.057045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:31.063439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:33.068658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:33.077146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:35.081289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:35.088632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:37.093456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:58:37.102435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-696492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.34s)
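The post-mortem collection for this profile can be re-run by hand. A minimal sketch using the same commands the harness runs above (profile name default-k8s-diff-port-696492 taken from the log; the trailing `minikube logs` call is an assumption added here for gathering a fuller log bundle, not part of the harness output):

  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492
  kubectl --context default-k8s-diff-port-696492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
  # assumption: not run by the harness above, but handy when reproducing the failure locally
  out/minikube-linux-amd64 logs -p default-k8s-diff-port-696492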

                                                
                                    

Test pass (303/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 14.14
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.25
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.1/json-events 12.97
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.28
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.18
20 TestDownloadOnlyKic 0.47
21 TestBinaryMirror 0.87
22 TestOffline 58.92
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 124.8
29 TestAddons/serial/Volcano 39.24
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.51
35 TestAddons/parallel/Registry 15.4
36 TestAddons/parallel/RegistryCreds 0.72
37 TestAddons/parallel/Ingress 20.55
38 TestAddons/parallel/InspektorGadget 10.9
39 TestAddons/parallel/MetricsServer 5.95
41 TestAddons/parallel/CSI 60.91
42 TestAddons/parallel/Headlamp 18.61
43 TestAddons/parallel/CloudSpanner 5.55
44 TestAddons/parallel/LocalPath 54.78
45 TestAddons/parallel/NvidiaDevicePlugin 5.67
46 TestAddons/parallel/Yakd 10.74
47 TestAddons/parallel/AmdGpuDevicePlugin 5.53
48 TestAddons/StoppedEnableDisable 12.92
49 TestCertOptions 32.96
50 TestCertExpiration 215.81
52 TestForceSystemdFlag 34.38
53 TestForceSystemdEnv 28.71
54 TestDockerEnvContainerd 39.27
58 TestErrorSpam/setup 22.45
59 TestErrorSpam/start 0.71
60 TestErrorSpam/status 1.01
61 TestErrorSpam/pause 1.52
62 TestErrorSpam/unpause 1.61
63 TestErrorSpam/stop 2.14
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 38.31
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.3
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.8
75 TestFunctional/serial/CacheCmd/cache/add_local 2.1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
80 TestFunctional/serial/CacheCmd/cache/delete 0.14
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 41.51
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.33
86 TestFunctional/serial/LogsFileCmd 1.34
87 TestFunctional/serial/InvalidService 4.47
89 TestFunctional/parallel/ConfigCmd 0.52
90 TestFunctional/parallel/DashboardCmd 8.95
91 TestFunctional/parallel/DryRun 0.49
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 1.08
97 TestFunctional/parallel/ServiceCmdConnect 11.58
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 32.6
101 TestFunctional/parallel/SSHCmd 0.74
102 TestFunctional/parallel/CpCmd 2.03
103 TestFunctional/parallel/MySQL 21.5
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 2.05
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
113 TestFunctional/parallel/License 0.46
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.31
119 TestFunctional/parallel/ServiceCmd/DeployApp 11.15
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.46
128 TestFunctional/parallel/ServiceCmd/List 0.59
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
130 TestFunctional/parallel/MountCmd/any-port 9.43
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
133 TestFunctional/parallel/ServiceCmd/Format 0.43
134 TestFunctional/parallel/ServiceCmd/URL 0.44
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 0.53
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
141 TestFunctional/parallel/ImageCommands/ImageBuild 4.8
142 TestFunctional/parallel/ImageCommands/Setup 2
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.19
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.14
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.22
146 TestFunctional/parallel/MountCmd/specific-port 1.76
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
150 TestFunctional/parallel/MountCmd/VerifyCleanup 2.27
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 144.5
163 TestMultiControlPlane/serial/DeployApp 6
164 TestMultiControlPlane/serial/PingHostFromPods 1.23
165 TestMultiControlPlane/serial/AddWorkerNode 25.17
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.98
168 TestMultiControlPlane/serial/CopyFile 18.81
169 TestMultiControlPlane/serial/StopSecondaryNode 12.85
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.1
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.97
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 97.4
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.62
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
176 TestMultiControlPlane/serial/StopCluster 36.33
177 TestMultiControlPlane/serial/RestartCluster 54.27
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
179 TestMultiControlPlane/serial/AddSecondaryNode 38.48
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.97
185 TestJSONOutput/start/Command 38.49
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.7
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.64
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.98
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 36.6
211 TestKicCustomNetwork/use_default_bridge_network 23.59
212 TestKicExistingNetwork 24.8
213 TestKicCustomSubnet 27.76
214 TestKicStaticIP 28.74
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 52.34
219 TestMountStart/serial/StartWithMountFirst 7.67
220 TestMountStart/serial/VerifyMountFirst 0.29
221 TestMountStart/serial/StartWithMountSecond 4.7
222 TestMountStart/serial/VerifyMountSecond 0.29
223 TestMountStart/serial/DeleteFirst 1.74
224 TestMountStart/serial/VerifyMountPostDelete 0.29
225 TestMountStart/serial/Stop 1.27
226 TestMountStart/serial/RestartStopped 7.6
227 TestMountStart/serial/VerifyMountPostStop 0.3
230 TestMultiNode/serial/FreshStart2Nodes 97.99
231 TestMultiNode/serial/DeployApp2Nodes 4.97
232 TestMultiNode/serial/PingHostFrom2Pods 0.84
233 TestMultiNode/serial/AddNode 26.92
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.66
237 TestMultiNode/serial/StopNode 2.36
238 TestMultiNode/serial/StartAfterStop 7.1
239 TestMultiNode/serial/RestartKeepsNodes 68.9
240 TestMultiNode/serial/DeleteNode 5.39
241 TestMultiNode/serial/StopMultiNode 24.16
242 TestMultiNode/serial/RestartMultiNode 47.83
243 TestMultiNode/serial/ValidateNameConflict 27.03
248 TestPreload 116.63
250 TestScheduledStopUnix 99.59
253 TestInsufficientStorage 9.65
254 TestRunningBinaryUpgrade 57.62
256 TestKubernetesUpgrade 149.12
257 TestMissingContainerUpgrade 102.83
259 TestStoppedBinaryUpgrade/Setup 2.83
260 TestPause/serial/Start 56.24
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
263 TestNoKubernetes/serial/StartWithK8s 35.48
264 TestStoppedBinaryUpgrade/Upgrade 105.57
265 TestNoKubernetes/serial/StartWithStopK8s 11.41
266 TestNoKubernetes/serial/Start 5.3
267 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
269 TestNoKubernetes/serial/ProfileList 1.75
270 TestNoKubernetes/serial/Stop 3.5
271 TestPause/serial/SecondStartNoReconfiguration 7.71
272 TestNoKubernetes/serial/StartNoArgs 7.32
280 TestNetworkPlugins/group/false 4.43
281 TestPause/serial/Pause 0.8
282 TestPause/serial/VerifyStatus 0.39
283 TestPause/serial/Unpause 0.75
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
285 TestPause/serial/PauseAgain 0.85
286 TestPause/serial/DeletePaused 3.93
290 TestPause/serial/VerifyDeletedResources 2.58
291 TestStoppedBinaryUpgrade/MinikubeLogs 1.37
299 TestNetworkPlugins/group/auto/Start 46.87
300 TestNetworkPlugins/group/auto/KubeletFlags 0.34
301 TestNetworkPlugins/group/auto/NetCatPod 9.24
302 TestNetworkPlugins/group/kindnet/Start 39.25
303 TestNetworkPlugins/group/auto/DNS 0.14
304 TestNetworkPlugins/group/auto/Localhost 0.12
305 TestNetworkPlugins/group/auto/HairPin 0.12
306 TestNetworkPlugins/group/calico/Start 58.73
307 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
309 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
310 TestNetworkPlugins/group/kindnet/DNS 0.15
311 TestNetworkPlugins/group/kindnet/Localhost 0.14
312 TestNetworkPlugins/group/kindnet/HairPin 0.13
313 TestNetworkPlugins/group/custom-flannel/Start 62.25
314 TestNetworkPlugins/group/enable-default-cni/Start 62.62
315 TestNetworkPlugins/group/flannel/Start 54.46
316 TestNetworkPlugins/group/calico/ControllerPod 6.01
317 TestNetworkPlugins/group/calico/KubeletFlags 0.35
318 TestNetworkPlugins/group/calico/NetCatPod 8.22
319 TestNetworkPlugins/group/calico/DNS 0.21
320 TestNetworkPlugins/group/calico/Localhost 0.17
321 TestNetworkPlugins/group/calico/HairPin 0.15
322 TestNetworkPlugins/group/bridge/Start 63.66
323 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
324 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.25
325 TestNetworkPlugins/group/flannel/ControllerPod 6.01
326 TestNetworkPlugins/group/custom-flannel/DNS 0.16
327 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
328 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.23
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
332 TestNetworkPlugins/group/flannel/NetCatPod 8.28
333 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
334 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
335 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
336 TestNetworkPlugins/group/flannel/DNS 0.15
337 TestNetworkPlugins/group/flannel/Localhost 0.13
338 TestNetworkPlugins/group/flannel/HairPin 0.12
340 TestStartStop/group/old-k8s-version/serial/FirstStart 56.49
342 TestStartStop/group/no-preload/serial/FirstStart 57.41
344 TestStartStop/group/embed-certs/serial/FirstStart 48.52
345 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
346 TestNetworkPlugins/group/bridge/NetCatPod 9.26
347 TestNetworkPlugins/group/bridge/DNS 0.25
348 TestNetworkPlugins/group/bridge/Localhost 0.24
349 TestNetworkPlugins/group/bridge/HairPin 0.23
352 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.41
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.03
356 TestStartStop/group/old-k8s-version/serial/Stop 12.19
357 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
359 TestStartStop/group/embed-certs/serial/Stop 12.25
360 TestStartStop/group/no-preload/serial/Stop 12.33
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
362 TestStartStop/group/old-k8s-version/serial/SecondStart 47.21
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
364 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
365 TestStartStop/group/embed-certs/serial/SecondStart 48.24
366 TestStartStop/group/no-preload/serial/SecondStart 47.93
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
369 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.18
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
371 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 43.42
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.08
374 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
375 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
377 TestStartStop/group/old-k8s-version/serial/Pause 3.29
378 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
379 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
381 TestStartStop/group/newest-cni/serial/FirstStart 28.63
382 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
383 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
384 TestStartStop/group/no-preload/serial/Pause 3.9
385 TestStartStop/group/embed-certs/serial/Pause 3.87
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/newest-cni/serial/DeployApp 0
388 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.82
389 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
390 TestStartStop/group/newest-cni/serial/Stop 1.31
391 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
392 TestStartStop/group/newest-cni/serial/SecondStart 11.06
393 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
394 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.04
395 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
396 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
397 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
398 TestStartStop/group/newest-cni/serial/Pause 2.91
x
+
TestDownloadOnly/v1.28.0/json-events (14.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-607785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-607785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.138514406s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (14.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 09:20:02.928646    7109 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1123 09:20:02.928736    7109 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
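The preload-exists step appears to assert only that the cached tarball is already on disk. A minimal equivalent manual check (path taken verbatim from the log lines above; a sketch, not the test's actual implementation):

  test -f /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 \
    && echo "preload present" || echo "preload missing"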

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-607785
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-607785: exit status 85 (83.866607ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-607785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-607785 │ jenkins │ v1.37.0 │ 23 Nov 25 09:19 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:19:48
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:19:48.846821    7121 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:19:48.847059    7121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:19:48.847068    7121 out.go:374] Setting ErrFile to fd 2...
	I1123 09:19:48.847073    7121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:19:48.847258    7121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	W1123 09:19:48.847393    7121 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21968-3552/.minikube/config/config.json: open /home/jenkins/minikube-integration/21968-3552/.minikube/config/config.json: no such file or directory
	I1123 09:19:48.847866    7121 out.go:368] Setting JSON to true
	I1123 09:19:48.848747    7121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":128,"bootTime":1763889461,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:19:48.848812    7121 start.go:143] virtualization: kvm guest
	I1123 09:19:48.853740    7121 out.go:99] [download-only-607785] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1123 09:19:48.853914    7121 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 09:19:48.853947    7121 notify.go:221] Checking for updates...
	I1123 09:19:48.855696    7121 out.go:171] MINIKUBE_LOCATION=21968
	I1123 09:19:48.857686    7121 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:19:48.859493    7121 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:19:48.861080    7121 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:19:48.862620    7121 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 09:19:48.865256    7121 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 09:19:48.865530    7121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:19:48.891960    7121 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:19:48.892090    7121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:19:49.362279    7121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-23 09:19:49.350429089 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:19:49.362460    7121 docker.go:319] overlay module found
	I1123 09:19:49.364351    7121 out.go:99] Using the docker driver based on user configuration
	I1123 09:19:49.364409    7121 start.go:309] selected driver: docker
	I1123 09:19:49.364418    7121 start.go:927] validating driver "docker" against <nil>
	I1123 09:19:49.364518    7121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:19:49.431525    7121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-23 09:19:49.422015031 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:19:49.431686    7121 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:19:49.432234    7121 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1123 09:19:49.432446    7121 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 09:19:49.434522    7121 out.go:171] Using Docker driver with root privileges
	I1123 09:19:49.435880    7121 cni.go:84] Creating CNI manager for ""
	I1123 09:19:49.435947    7121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:19:49.435960    7121 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:19:49.436031    7121 start.go:353] cluster config:
	{Name:download-only-607785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-607785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:19:49.437589    7121 out.go:99] Starting "download-only-607785" primary control-plane node in "download-only-607785" cluster
	I1123 09:19:49.437606    7121 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 09:19:49.439288    7121 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:19:49.439353    7121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 09:19:49.439488    7121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:19:49.457277    7121 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 09:19:49.457577    7121 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 09:19:49.457720    7121 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 09:19:49.535513    7121 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1123 09:19:49.535562    7121 cache.go:65] Caching tarball of preloaded images
	I1123 09:19:49.535726    7121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 09:19:49.538044    7121 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 09:19:49.538074    7121 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1123 09:19:49.644165    7121 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1123 09:19:49.644291    7121 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-607785 host does not exist
	  To start a cluster, run: "minikube start -p download-only-607785"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
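The "Last Start" log above shows the preload fetch flow: the md5 checksum is obtained from the GCS API, appended to the download URL as a query parameter, and the tarball is then cached locally. A minimal shell sketch of the same checksum-verified download (URL and checksum copied verbatim from the log; curl and md5sum are assumed to be available):

  URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4"
  SUM="2746dfda401436a5341e0500068bf339"
  # download the preload tarball and verify it against the checksum reported by the GCS API
  curl -fL -o preloaded-images.tar.lz4 "$URL"
  echo "$SUM  preloaded-images.tar.lz4" | md5sum -c -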

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.25s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-607785
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (12.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-154829 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-154829 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.968837018s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.97s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 09:20:16.388419    7109 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1123 09:20:16.388466    7109 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.1s)
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-154829
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-154829: exit status 85 (99.978334ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-607785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-607785 │ jenkins │ v1.37.0 │ 23 Nov 25 09:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │ 23 Nov 25 09:20 UTC │
	│ delete  │ -p download-only-607785                                                                                                                                                               │ download-only-607785 │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │ 23 Nov 25 09:20 UTC │
	│ start   │ -o=json --download-only -p download-only-154829 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-154829 │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:20:03
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:20:03.475060    7509 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:20:03.475326    7509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:20:03.475352    7509 out.go:374] Setting ErrFile to fd 2...
	I1123 09:20:03.475356    7509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:20:03.475588    7509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:20:03.476104    7509 out.go:368] Setting JSON to true
	I1123 09:20:03.476941    7509 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":142,"bootTime":1763889461,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:20:03.476998    7509 start.go:143] virtualization: kvm guest
	I1123 09:20:03.479386    7509 out.go:99] [download-only-154829] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:20:03.479598    7509 notify.go:221] Checking for updates...
	I1123 09:20:03.481402    7509 out.go:171] MINIKUBE_LOCATION=21968
	I1123 09:20:03.483040    7509 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:20:03.485176    7509 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:20:03.486885    7509 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:20:03.488534    7509 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 09:20:03.491629    7509 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 09:20:03.491903    7509 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:20:03.517023    7509 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:20:03.517126    7509 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:20:03.581136    7509 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-23 09:20:03.570906783 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:20:03.581246    7509 docker.go:319] overlay module found
	I1123 09:20:03.583447    7509 out.go:99] Using the docker driver based on user configuration
	I1123 09:20:03.583484    7509 start.go:309] selected driver: docker
	I1123 09:20:03.583490    7509 start.go:927] validating driver "docker" against <nil>
	I1123 09:20:03.583651    7509 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:20:03.643135    7509 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-23 09:20:03.633008695 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:20:03.643271    7509 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:20:03.643809    7509 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1123 09:20:03.643943    7509 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 09:20:03.646023    7509 out.go:171] Using Docker driver with root privileges
	I1123 09:20:03.647685    7509 cni.go:84] Creating CNI manager for ""
	I1123 09:20:03.647751    7509 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:20:03.647762    7509 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:20:03.647835    7509 start.go:353] cluster config:
	{Name:download-only-154829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-154829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:20:03.649309    7509 out.go:99] Starting "download-only-154829" primary control-plane node in "download-only-154829" cluster
	I1123 09:20:03.649327    7509 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 09:20:03.650643    7509 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:20:03.650685    7509 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:20:03.650766    7509 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:20:03.668556    7509 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 09:20:03.668730    7509 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 09:20:03.668754    7509 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 09:20:03.668759    7509 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 09:20:03.668768    7509 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 09:20:03.829104    7509 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 09:20:03.829162    7509 cache.go:65] Caching tarball of preloaded images
	I1123 09:20:03.829370    7509 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:20:03.831502    7509 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1123 09:20:03.831522    7509 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1123 09:20:03.936892    7509 preload.go:295] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1123 09:20:03.936937    7509 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21968-3552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-154829 host does not exist
	  To start a cluster, run: "minikube start -p download-only-154829"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)

TestDownloadOnly/v1.34.1/DeleteAll (0.28s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.28s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.18s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-154829
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.18s)

TestDownloadOnlyKic (0.47s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-909522 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-909522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-909522
--- PASS: TestDownloadOnlyKic (0.47s)

TestBinaryMirror (0.87s)
=== RUN   TestBinaryMirror
I1123 09:20:17.759141    7109 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-481264 --alsologtostderr --binary-mirror http://127.0.0.1:46423 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-481264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-481264
--- PASS: TestBinaryMirror (0.87s)

TestOffline (58.92s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-864162 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-864162 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (55.647452332s)
helpers_test.go:175: Cleaning up "offline-containerd-864162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-864162
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-864162: (3.267858967s)
--- PASS: TestOffline (58.92s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-300235
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-300235: exit status 85 (68.117958ms)

-- stdout --
	* Profile "addons-300235" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-300235"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-300235
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-300235: exit status 85 (67.324274ms)

-- stdout --
	* Profile "addons-300235" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-300235"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (124.8s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-300235 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-300235 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m4.803010108s)
--- PASS: TestAddons/Setup (124.80s)
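
TestAddons/Setup passes every addon as its own --addons flag on a single start invocation. The sketch below (hypothetical, not the test's code) assembles that command from a slice of addon names; the addon list, profile, and other flags are taken from the log above.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	addons := []string{
		"registry", "registry-creds", "metrics-server", "volumesnapshots",
		"csi-hostpath-driver", "gcp-auth", "cloud-spanner", "inspektor-gadget",
		"nvidia-device-plugin", "yakd", "volcano", "amd-gpu-device-plugin",
		"ingress", "ingress-dns", "storage-provisioner-rancher",
	}
	args := []string{"start", "-p", "addons-300235", "--wait=true",
		"--memory=4096", "--driver=docker", "--container-runtime=containerd"}
	for _, a := range addons {
		args = append(args, "--addons="+a) // one flag per addon, as in the test
	}
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("start failed:", err)
	}
}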

TestAddons/serial/Volcano (39.24s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 13.653634ms
addons_test.go:868: volcano-scheduler stabilized in 13.718324ms
addons_test.go:884: volcano-controller stabilized in 13.96389ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-zbnm4" [ba62d50f-95bb-4e10-aa63-e49f6cf4cc26] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004120174s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-wvtpf" [ff736834-3280-48c0-b164-72fcb9786728] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004213611s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-hmpgq" [75c5bebb-9298-4c1d-876a-4366ac92b132] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003521896s
addons_test.go:903: (dbg) Run:  kubectl --context addons-300235 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-300235 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-300235 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [d4cfbf88-fee4-457d-924a-8b2575467df3] Pending
helpers_test.go:352: "test-job-nginx-0" [d4cfbf88-fee4-457d-924a-8b2575467df3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [d4cfbf88-fee4-457d-924a-8b2575467df3] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004153016s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300235 addons disable volcano --alsologtostderr -v=1: (11.813282004s)
--- PASS: TestAddons/serial/Volcano (39.24s)
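
The "waiting ... for pods matching <label>" messages above, and in most of the addon tests that follow, reflect a poll-until-Running loop. The following Go sketch is a rough stand-in for that pattern, not the actual helpers_test.go implementation; the context, namespace, label, and 6m0s budget are taken from the Volcano log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// allRunning reports whether every listed phase is "Running".
func allRunning(phases []string) bool {
	if len(phases) == 0 {
		return false
	}
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}

// waitForRunning polls kubectl until all pods matching the selector report phase Running.
func waitForRunning(kubeContext, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && allRunning(strings.Fields(string(out))) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not Running within %s", selector, namespace, timeout)
}

func main() {
	if err := waitForRunning("addons-300235", "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}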

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-300235 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-300235 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (10.51s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-300235 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-300235 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [62f164b7-9181-4566-a5ca-527919ecc96a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [62f164b7-9181-4566-a5ca-527919ecc96a] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003289403s
addons_test.go:694: (dbg) Run:  kubectl --context addons-300235 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-300235 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-300235 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.51s)
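
The FakeCredentials subtest confirms that the gcp-auth webhook injected credentials into the busybox pod by exec-ing printenv for each expected variable. A compact, hypothetical equivalent of that check (context and pod name from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// podEnv reads one environment variable from a running pod via kubectl exec.
func podEnv(kubeContext, pod, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"exec", pod, "--", "/bin/sh", "-c", "printenv "+name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for _, v := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
		val, err := podEnv("addons-300235", "busybox", v)
		if err != nil || val == "" {
			fmt.Printf("%s missing in pod\n", v)
			continue
		}
		fmt.Printf("%s=%s\n", v, val)
	}
}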

TestAddons/parallel/Registry (15.4s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.87301ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-hnmtz" [ac2f70ea-c3e8-455e-9fcb-ba00e2a55954] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004017988s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-cklcv" [628735c8-bea1-4539-8ebe-4ac78ec4269d] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004120367s
addons_test.go:392: (dbg) Run:  kubectl --context addons-300235 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-300235 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-300235 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.531865015s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 ip
2025/11/23 09:23:37 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.40s)
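
Besides the in-cluster wget --spider probe, the registry test fetches the registry through the node IP; the log records GET http://192.168.49.2:5000. A minimal sketch of that external half, resolving the address with "minikube ip" instead of hard-coding it (illustrative only):

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the node IP the same way the test does: minikube -p <profile> ip.
	ipOut, err := exec.Command("out/minikube-linux-amd64", "-p", "addons-300235", "ip").Output()
	if err != nil {
		panic(err)
	}
	url := fmt.Sprintf("http://%s:5000", strings.TrimSpace(string(ipOut)))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}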

TestAddons/parallel/RegistryCreds (0.72s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.908505ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-300235
addons_test.go:332: (dbg) Run:  kubectl --context addons-300235 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.72s)

TestAddons/parallel/Ingress (20.55s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-300235 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-300235 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-300235 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f6997072-6a63-41bd-bfd7-fcf7dd589d60] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [f6997072-6a63-41bd-bfd7-fcf7dd589d60] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003352478s
I1123 09:23:38.846025    7109 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-300235 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300235 addons disable ingress-dns --alsologtostderr -v=1: (1.142512919s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300235 addons disable ingress --alsologtostderr -v=1: (7.929714234s)
--- PASS: TestAddons/parallel/Ingress (20.55s)
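
The ingress check curls 127.0.0.1 from inside the node over "minikube ssh" with a spoofed Host header. An equivalent probe from the host machine, assuming the ingress controller is reachable on the node IP on port 80 (a sketch, not the test's code):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.49.2/", nil) // node IP taken from the log
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // overrides the Host header so the Ingress rule matches
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}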

TestAddons/parallel/InspektorGadget (10.9s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-pmg2g" [f8aa30c5-3d2b-43ab-a33f-c1231fd5adcc] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003811446s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300235 addons disable inspektor-gadget --alsologtostderr -v=1: (5.897106537s)
--- PASS: TestAddons/parallel/InspektorGadget (10.90s)

TestAddons/parallel/MetricsServer (5.95s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.36572ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-drjqr" [6f29afcc-c19a-4ab8-9409-66b9aba4a8ba] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003980837s
addons_test.go:463: (dbg) Run:  kubectl --context addons-300235 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.95s)

TestAddons/parallel/CSI (60.91s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1123 09:23:48.957616    7109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 09:23:48.961707    7109 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 09:23:48.961728    7109 kapi.go:107] duration metric: took 4.128444ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.138393ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-300235 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-300235 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [bdacb9df-64d0-40eb-a574-df795fbcd8c3] Pending
helpers_test.go:352: "task-pv-pod" [bdacb9df-64d0-40eb-a574-df795fbcd8c3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [bdacb9df-64d0-40eb-a574-df795fbcd8c3] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00366818s
addons_test.go:572: (dbg) Run:  kubectl --context addons-300235 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-300235 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-300235 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-300235 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-300235 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-300235 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-300235 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [2a7c25ab-ae96-4b7f-992d-76e387260666] Pending
helpers_test.go:352: "task-pv-pod-restore" [2a7c25ab-ae96-4b7f-992d-76e387260666] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [2a7c25ab-ae96-4b7f-992d-76e387260666] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003370348s
addons_test.go:614: (dbg) Run:  kubectl --context addons-300235 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-300235 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-300235 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300235 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.622105048s)
--- PASS: TestAddons/parallel/CSI (60.91s)
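
The long run of "kubectl get pvc ... -o jsonpath={.status.phase}" calls above is a phase-polling loop that waits for each claim to bind. A minimal stand-in for that loop (not the real helpers_test.go code), with the context, claim name, and timeout taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the claim's phase until it is "Bound" or the timeout expires.
func waitForPVCBound(kubeContext, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	// Names from the log; the 6m0s budget matches the test's wait message.
	if err := waitForPVCBound("addons-300235", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}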

TestAddons/parallel/Headlamp (18.61s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-300235 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-5l4zx" [6a82a158-05f8-42cf-95f6-7bb0ecb37e19] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-5l4zx" [6a82a158-05f8-42cf-95f6-7bb0ecb37e19] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003775049s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300235 addons disable headlamp --alsologtostderr -v=1: (5.767523414s)
--- PASS: TestAddons/parallel/Headlamp (18.61s)

TestAddons/parallel/CloudSpanner (5.55s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-f2jz9" [a7f644a4-808e-4a86-a6dd-099dcada5f9b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002659291s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

TestAddons/parallel/LocalPath (54.78s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-300235 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-300235 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300235 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [132acc68-d870-42f8-b9ed-5e5e7c492bd0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [132acc68-d870-42f8-b9ed-5e5e7c492bd0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [132acc68-d870-42f8-b9ed-5e5e7c492bd0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003242888s
addons_test.go:967: (dbg) Run:  kubectl --context addons-300235 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 ssh "cat /opt/local-path-provisioner/pvc-ba9e722d-64a5-4a73-b69c-5baf0169e5c3_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-300235 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-300235 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300235 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.80852549s)
--- PASS: TestAddons/parallel/LocalPath (54.78s)
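
The LocalPath test reads back the file written by the pod straight off the node: local-path-provisioner stores each volume under /opt/local-path-provisioner/<pv-name>_<namespace>_<pvc-name>, as the path in the log shows. The sketch below reconstructs that path from the PVC and reads the file over "minikube ssh" (illustrative, not the test's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Look up the generated PV name bound to the test-pvc claim.
	pv, err := exec.Command("kubectl", "--context", "addons-300235",
		"get", "pvc", "test-pvc", "-n", "default",
		"-o", "jsonpath={.spec.volumeName}").Output()
	if err != nil {
		panic(err)
	}
	// Path layout as seen in the log: <pv-name>_<namespace>_<pvc-name>.
	path := fmt.Sprintf("/opt/local-path-provisioner/%s_default_test-pvc", strings.TrimSpace(string(pv)))
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "addons-300235",
		"ssh", "cat "+path+"/file1").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("file1 contents: %q\n", out)
}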

TestAddons/parallel/NvidiaDevicePlugin (5.67s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5fgqm" [b96bc3d8-3c89-42e3-89e2-5857814b0758] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003705504s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.67s)

TestAddons/parallel/Yakd (10.74s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-cjpzw" [918f02dd-e4f0-48c9-87e5-74a31f8a0cb8] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003289602s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300235 addons disable yakd --alsologtostderr -v=1: (5.737807927s)
--- PASS: TestAddons/parallel/Yakd (10.74s)

TestAddons/parallel/AmdGpuDevicePlugin (5.53s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-njksg" [f60f6db1-5db0-498b-9c49-c1bcd7414177] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004802389s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300235 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.53s)

TestAddons/StoppedEnableDisable (12.92s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-300235
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-300235: (12.607977058s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-300235
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-300235
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-300235
--- PASS: TestAddons/StoppedEnableDisable (12.92s)

TestCertOptions (32.96s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-759668 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-759668 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (30.089322213s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-759668 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-759668 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-759668 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-759668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-759668
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-759668: (2.093531586s)
--- PASS: TestCertOptions (32.96s)
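
TestCertOptions starts a cluster with extra API-server SANs and a non-default port, then inspects the generated certificate and kubeconfig. A minimal reproduction sketch, assuming a hypothetical profile name; the grep filters are additions, not part of the test:

$ minikube start -p cert-options --memory=3072 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=docker --container-runtime=containerd
# the extra IPs/names should appear as Subject Alternative Names in apiserver.crt
$ minikube -p cert-options ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 "Subject Alternative Name"
# the kubeconfig and admin.conf should point at the custom port 8555
$ kubectl --context cert-options config view | grep 8555
$ minikube ssh -p cert-options -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555
$ minikube delete -p cert-options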

                                                
                                    
x
+
TestCertExpiration (215.81s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-265268 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-265268 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (25.51654415s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-265268 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-265268 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.696409841s)
helpers_test.go:175: Cleaning up "cert-expiration-265268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-265268
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-265268: (2.599661259s)
--- PASS: TestCertExpiration (215.81s)
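
The cert-expiration flow first issues certificates with a 3-minute lifetime, then restarts the profile with an 8760h (one-year) value once the short-lived certs have run out; the gap between the two starts accounts for most of the 215s wall time above. A minimal sketch, assuming a hypothetical profile name:

$ minikube start -p cert-exp --memory=3072 --cert-expiration=3m \
    --driver=docker --container-runtime=containerd
# ... wait for the 3-minute certificates to expire ...
$ minikube start -p cert-exp --memory=3072 --cert-expiration=8760h \
    --driver=docker --container-runtime=containerd
$ minikube delete -p cert-exp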

                                                
                                    
x
+
TestForceSystemdFlag (34.38s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-368801 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-368801 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.88177697s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-368801 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-368801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-368801
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-368801: (2.126256765s)
--- PASS: TestForceSystemdFlag (34.38s)
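
The test passes --force-systemd and then reads the generated containerd config over SSH. A minimal check along the same lines, assuming the expected effect is the systemd cgroup driver being enabled in config.toml (the grep pattern and the SystemdCgroup expectation are assumptions, not taken from the log):

$ minikube start -p force-systemd --memory=3072 --force-systemd \
    --driver=docker --container-runtime=containerd
$ minikube -p force-systemd ssh "cat /etc/containerd/config.toml" | grep -i systemd
# expected (assumption): SystemdCgroup = true under the runc runtime options
$ minikube delete -p force-systemd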

                                                
                                    
x
+
TestForceSystemdEnv (28.71s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-167424 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-167424 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (26.100944385s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-167424 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-167424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-167424
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-167424: (2.222990245s)
--- PASS: TestForceSystemdEnv (28.71s)

                                                
                                    
x
+
TestDockerEnvContainerd (39.27s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-326128 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-326128 --driver=docker  --container-runtime=containerd: (22.76543604s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-326128"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-326128": (1.036482396s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXf9lCC9/agent.31372" SSH_AGENT_PID="31373" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXf9lCC9/agent.31372" SSH_AGENT_PID="31373" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXf9lCC9/agent.31372" SSH_AGENT_PID="31373" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (2.066989372s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXf9lCC9/agent.31372" SSH_AGENT_PID="31373" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-326128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-326128
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-326128: (2.394057068s)
--- PASS: TestDockerEnvContainerd (39.27s)
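
This test points the host docker CLI at the containerd-backed minikube node over SSH. The log spells out the exported SSH_AUTH_SOCK/DOCKER_HOST variables; the usual interactive equivalent, assuming a bash shell and a hypothetical profile name, is to eval the docker-env output:

$ minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd
# exports DOCKER_HOST=ssh://..., starts an ssh-agent and adds the node key
$ eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-demo)"
$ docker version                      # now talks to the daemon inside the node
$ DOCKER_BUILDKIT=0 docker build -t local/demo:latest ./testdata/docker-env
$ docker image ls | grep local/demo
$ minikube delete -p dockerenv-demo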

                                                
                                    
x
+
TestErrorSpam/setup (22.45s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-135887 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-135887 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-135887 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-135887 --driver=docker  --container-runtime=containerd: (22.447524078s)
--- PASS: TestErrorSpam/setup (22.45s)

                                                
                                    
x
+
TestErrorSpam/start (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 start --dry-run
--- PASS: TestErrorSpam/start (0.71s)

                                                
                                    
x
+
TestErrorSpam/status (1.01s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 status
--- PASS: TestErrorSpam/status (1.01s)

                                                
                                    
x
+
TestErrorSpam/pause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

                                                
                                    
x
+
TestErrorSpam/stop (2.14s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 stop: (1.914930521s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-135887 --log_dir /tmp/nospam-135887 stop
--- PASS: TestErrorSpam/stop (2.14s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21968-3552/.minikube/files/etc/test/nested/copy/7109/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (38.31s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-776058 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-776058 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (38.310972646s)
--- PASS: TestFunctional/serial/StartWithProxy (38.31s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.3s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1123 09:27:00.495642    7109 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-776058 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-776058 --alsologtostderr -v=8: (6.300324567s)
functional_test.go:678: soft start took 6.301062216s for "functional-776058" cluster.
I1123 09:27:06.796321    7109 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.30s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-776058 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.8s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.80s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-776058 /tmp/TestFunctionalserialCacheCmdcacheadd_local1957065952/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 cache add minikube-local-cache-test:functional-776058
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-776058 cache add minikube-local-cache-test:functional-776058: (1.663536512s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 cache delete minikube-local-cache-test:functional-776058
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-776058
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.10s)
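
add_local builds an image on the host docker daemon and then pushes it into the node's image store via `minikube cache add`. A sketch of that round trip, assuming a hypothetical tag and build context; the crictl check is an addition used here only to confirm the image landed in the node:

$ docker build -t minikube-local-cache-test:demo ./some-context
$ minikube -p functional-776058 cache add minikube-local-cache-test:demo
# the image is now visible to the containerd runtime inside the node
$ minikube -p functional-776058 ssh sudo crictl images | grep minikube-local-cache-test
$ minikube -p functional-776058 cache delete minikube-local-cache-test:demo
$ docker rmi minikube-local-cache-test:demo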

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776058 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (309.58843ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
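
cache_reload deletes a cached image from the runtime and shows that `minikube cache reload` restores everything previously added with `cache add`. The same sequence as the log, against the same profile:

# remove the image from the node's containerd store
$ minikube -p functional-776058 ssh sudo crictl rmi registry.k8s.io/pause:latest
# inspecti now fails with "no such image" (exit status 1), as captured above
$ minikube -p functional-776058 ssh sudo crictl inspecti registry.k8s.io/pause:latest
# re-push everything held in the local cache back into the node
$ minikube -p functional-776058 cache reload
$ minikube -p functional-776058 ssh sudo crictl inspecti registry.k8s.io/pause:latest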

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 kubectl -- --context functional-776058 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-776058 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (41.51s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-776058 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1123 09:27:23.506577    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:23.513016    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:23.524488    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:23.545894    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:23.587397    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:23.668896    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:23.830421    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:24.152103    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:24.794146    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:26.075728    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:28.638638    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:33.760894    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:44.003172    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-776058 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.507100306s)
functional_test.go:776: restart took 41.507219234s for "functional-776058" cluster.
I1123 09:27:55.827544    7109 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (41.51s)
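
ExtraConfig restarts the existing cluster with an extra kube-apiserver flag; the --extra-config syntax is component.key=value. The repeated "Loading client cert failed" lines above appear to be noise from the earlier addons-300235 profile whose client.crt is gone from disk, and do not affect the restart. A sketch of the invocation used here:

$ minikube start -p functional-776058 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all
# the flag is recorded in the profile config as
#   ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}]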

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-776058 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
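
ComponentHealth lists the control-plane pods and checks their phase and readiness. The same check can be approximated from the CLI; the jq filter is an assumption and not part of the test:

$ kubectl --context functional-776058 get po -l tier=control-plane -n kube-system -o json \
    | jq -r '.items[] | "\(.metadata.labels.component): \(.status.phase)"'
# expected: etcd, kube-apiserver, kube-controller-manager and kube-scheduler all Running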

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-776058 logs: (1.325045073s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 logs --file /tmp/TestFunctionalserialLogsFileCmd3951342030/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-776058 logs --file /tmp/TestFunctionalserialLogsFileCmd3951342030/001/logs.txt: (1.342849268s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)
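
LogsCmd and LogsFileCmd differ only in the destination; --file writes the same output to a path instead of stdout. A sketch, with a hypothetical output path:

$ minikube -p functional-776058 logs                        # to stdout
$ minikube -p functional-776058 logs --file /tmp/logs.txt   # same content, to a file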

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.47s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-776058 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-776058
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-776058: exit status 115 (370.829417ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31817 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-776058 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.47s)
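
InvalidService applies a Service with no backing pods and confirms that `minikube service` fails cleanly (exit status 115, SVC_UNREACHABLE) instead of hanging. A sketch of the same sequence:

$ kubectl --context functional-776058 apply -f testdata/invalidsvc.yaml
# exits 115 with SVC_UNREACHABLE because no running pod backs the service
$ minikube service invalid-svc -p functional-776058
$ kubectl --context functional-776058 delete -f testdata/invalidsvc.yaml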

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776058 config get cpus: exit status 14 (116.635453ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776058 config get cpus: exit status 14 (91.106158ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
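
ConfigCmd round-trips a value through `minikube config`. Reading a key that has never been set (or was just unset) exits with status 14, which is what the two "Non-zero exit" entries above show:

$ minikube -p functional-776058 config get cpus     # exit 14: key not found
$ minikube -p functional-776058 config set cpus 2
$ minikube -p functional-776058 config get cpus     # prints 2
$ minikube -p functional-776058 config unset cpus
$ minikube -p functional-776058 config get cpus     # exit 14 again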

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (8.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-776058 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-776058 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 51698: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.95s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-776058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-776058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (199.528343ms)

                                                
                                                
-- stdout --
	* [functional-776058] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:28:18.284815   50707 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:28:18.285080   50707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:28:18.285091   50707 out.go:374] Setting ErrFile to fd 2...
	I1123 09:28:18.285095   50707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:28:18.285299   50707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:28:18.285931   50707 out.go:368] Setting JSON to false
	I1123 09:28:18.287020   50707 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":637,"bootTime":1763889461,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:28:18.287101   50707 start.go:143] virtualization: kvm guest
	I1123 09:28:18.289427   50707 out.go:179] * [functional-776058] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:28:18.291389   50707 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:28:18.291376   50707 notify.go:221] Checking for updates...
	I1123 09:28:18.294686   50707 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:28:18.296222   50707 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:28:18.297794   50707 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:28:18.299411   50707 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:28:18.300913   50707 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:28:18.303152   50707 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:28:18.303985   50707 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:28:18.335376   50707 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:28:18.335561   50707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:28:18.402480   50707 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 09:28:18.389973061 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:28:18.402678   50707 docker.go:319] overlay module found
	I1123 09:28:18.404586   50707 out.go:179] * Using the docker driver based on existing profile
	I1123 09:28:18.405773   50707 start.go:309] selected driver: docker
	I1123 09:28:18.405795   50707 start.go:927] validating driver "docker" against &{Name:functional-776058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-776058 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:28:18.405925   50707 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:28:18.408166   50707 out.go:203] 
	W1123 09:28:18.410431   50707 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 09:28:18.411844   50707 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-776058 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.49s)
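
DryRun shows that argument validation happens before any resources are touched: asking for 250MB against the 1800MB usable minimum makes `minikube start --dry-run` exit with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the same dry run without the undersized memory request validates cleanly. A sketch of both invocations:

# fails fast: exit 23, RSRC_INSUFFICIENT_REQ_MEMORY
$ minikube start -p functional-776058 --dry-run --memory 250MB \
    --driver=docker --container-runtime=containerd
# without the bad memory request, the dry run passes validation
$ minikube start -p functional-776058 --dry-run \
    --driver=docker --container-runtime=containerd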

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-776058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-776058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (216.139159ms)

                                                
                                                
-- stdout --
	* [functional-776058] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:28:18.074960   50469 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:28:18.075235   50469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:28:18.075245   50469 out.go:374] Setting ErrFile to fd 2...
	I1123 09:28:18.075252   50469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:28:18.075735   50469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:28:18.076357   50469 out.go:368] Setting JSON to false
	I1123 09:28:18.077731   50469 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":637,"bootTime":1763889461,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:28:18.077825   50469 start.go:143] virtualization: kvm guest
	I1123 09:28:18.080237   50469 out.go:179] * [functional-776058] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1123 09:28:18.082289   50469 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:28:18.082291   50469 notify.go:221] Checking for updates...
	I1123 09:28:18.085666   50469 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:28:18.087075   50469 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:28:18.088550   50469 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:28:18.089932   50469 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:28:18.091218   50469 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:28:18.093997   50469 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:28:18.094734   50469 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:28:18.122691   50469 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:28:18.122790   50469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:28:18.198793   50469 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 09:28:18.186236607 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:28:18.198918   50469 docker.go:319] overlay module found
	I1123 09:28:18.201432   50469 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1123 09:28:18.205366   50469 start.go:309] selected driver: docker
	I1123 09:28:18.205393   50469 start.go:927] validating driver "docker" against &{Name:functional-776058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-776058 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:28:18.205531   50469 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:28:18.207724   50469 out.go:203] 
	W1123 09:28:18.210847   50469 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 09:28:18.212403   50469 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-776058 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-776058 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-fbt9l" [e843bd3b-ba09-4a94-a8fd-3b32d8c2bcbc] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-fbt9l" [e843bd3b-ba09-4a94-a8fd-3b32d8c2bcbc] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003840818s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31403
functional_test.go:1680: http://192.168.49.2:31403: success! body:
Request served by hello-node-connect-7d85dfc575-fbt9l

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31403
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.58s)
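
ServiceCmdConnect deploys an echo server, exposes it as a NodePort service and resolves the URL through minikube. A sketch of the same flow; the `kubectl wait` and the final curl are additions standing in for the test's own readiness polling and HTTP check:

$ kubectl --context functional-776058 create deployment hello-node-connect --image kicbase/echo-server
$ kubectl --context functional-776058 expose deployment hello-node-connect --type=NodePort --port=8080
$ kubectl --context functional-776058 wait --for=condition=available deploy/hello-node-connect --timeout=120s
$ minikube -p functional-776058 service hello-node-connect --url
# e.g. http://192.168.49.2:31403 - the response body echoes the serving pod name
$ curl "$(minikube -p functional-776058 service hello-node-connect --url)"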

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (32.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [270c1ba4-ceb6-48e2-ad04-81f5bf12968d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004057876s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-776058 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-776058 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-776058 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-776058 apply -f testdata/storage-provisioner/pod.yaml
I1123 09:28:10.172241    7109 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [fcf820b9-f86c-4b2f-9c52-95db6ac4453e] Pending
helpers_test.go:352: "sp-pod" [fcf820b9-f86c-4b2f-9c52-95db6ac4453e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [fcf820b9-f86c-4b2f-9c52-95db6ac4453e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.122280244s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-776058 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-776058 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-776058 apply -f testdata/storage-provisioner/pod.yaml
I1123 09:28:23.259706    7109 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [aa4306e0-abc7-478a-a204-3583352fbf29] Pending
helpers_test.go:352: "sp-pod" [aa4306e0-abc7-478a-a204-3583352fbf29] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [aa4306e0-abc7-478a-a204-3583352fbf29] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003719147s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-776058 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.60s)
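The PersistentVolumeClaim check above verifies that data written through the claim survives pod recreation. A minimal manual sketch, reusing the manifests from the test's testdata directory and this run's profile name:
	kubectl --context functional-776058 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-776058 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-776058 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-776058 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-776058 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-776058 exec sp-pod -- ls /tmp/mount
With the claim bound, the final ls should still list foo even though the pod was deleted and recreated in between.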

                                                
                                    
TestFunctional/parallel/SSHCmd (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh -n functional-776058 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 cp functional-776058:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4181410390/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh -n functional-776058 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh -n functional-776058 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.03s)

                                                
                                    
TestFunctional/parallel/MySQL (21.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-776058 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-7crhj" [7c9a6b0c-5c79-4906-b8ed-28a9bd3f3330] Pending
helpers_test.go:352: "mysql-5bb876957f-7crhj" [7c9a6b0c-5c79-4906-b8ed-28a9bd3f3330] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-7crhj" [7c9a6b0c-5c79-4906-b8ed-28a9bd3f3330] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.004064119s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-776058 exec mysql-5bb876957f-7crhj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-776058 exec mysql-5bb876957f-7crhj -- mysql -ppassword -e "show databases;": exit status 1 (141.605202ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1123 09:28:44.431463    7109 retry.go:31] will retry after 1.040238281s: exit status 1
E1123 09:28:45.447612    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-776058 exec mysql-5bb876957f-7crhj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-776058 exec mysql-5bb876957f-7crhj -- mysql -ppassword -e "show databases;": exit status 1 (115.277926ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1123 09:28:45.587480    7109 retry.go:31] will retry after 1.847046307s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-776058 exec mysql-5bb876957f-7crhj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-776058 exec mysql-5bb876957f-7crhj -- mysql -ppassword -e "show databases;": exit status 1 (103.856844ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1123 09:28:47.539577    7109 retry.go:31] will retry after 1.833582903s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-776058 exec mysql-5bb876957f-7crhj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.50s)
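The retries above are the test's normal behaviour rather than a failure: the pod reports Running before mysqld has finished initializing, so the client is re-run until it connects. The equivalent manual check (pod name and password come from this run's testdata/mysql.yaml deployment) is simply to repeat
	kubectl --context functional-776058 exec mysql-5bb876957f-7crhj -- mysql -ppassword -e "show databases;"
until it stops returning ERROR 1045/2002 and prints the database list.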

                                                
                                    
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/7109/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "sudo cat /etc/test/nested/copy/7109/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
TestFunctional/parallel/CertSync (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/7109.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "sudo cat /etc/ssl/certs/7109.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/7109.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "sudo cat /usr/share/ca-certificates/7109.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/71092.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "sudo cat /etc/ssl/certs/71092.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/71092.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "sudo cat /usr/share/ca-certificates/71092.pem"
E1123 09:28:04.485509    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.05s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-776058 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776058 ssh "sudo systemctl is-active docker": exit status 1 (315.303533ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776058 ssh "sudo systemctl is-active crio": exit status 1 (316.584759ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

                                                
                                    
TestFunctional/parallel/License (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-776058 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-776058 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-776058 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 47395: os: process already finished
helpers_test.go:519: unable to terminate pid 46968: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-776058 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-776058 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-776058 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [3253689f-7022-4027-bd77-b9ef0f0e5cc7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [3253689f-7022-4027-bd77-b9ef0f0e5cc7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.05378898s
I1123 09:28:15.351156    7109 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-776058 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-776058 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-6sqcw" [51be1813-5ea0-4cb5-bce1-72cec4104db5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-6sqcw" [51be1813-5ea0-4cb5-bce1-72cec4104db5] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004703875s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.15s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-776058 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.242.224 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
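The tunnel checks in this group follow the pattern: keep minikube tunnel running, create the LoadBalancer service, read its ingress IP, then hit that IP directly. A sketch built from the commands logged above (the final curl is an assumed stand-in for the test's HTTP probe; the IP 10.104.242.224 was assigned in this run):
	out/minikube-linux-amd64 -p functional-776058 tunnel --alsologtostderr
	kubectl --context functional-776058 apply -f testdata/testsvc.yaml
	kubectl --context functional-776058 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	curl http://10.104.242.224
The tunnel command stays in the foreground, so it needs its own terminal (the test runs it as a background daemon).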

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-776058 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "374.902478ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "80.83396ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "434.715522ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "71.776431ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-776058 /tmp/TestFunctionalparallelMountCmdany-port1258012015/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763890096669655320" to /tmp/TestFunctionalparallelMountCmdany-port1258012015/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763890096669655320" to /tmp/TestFunctionalparallelMountCmdany-port1258012015/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763890096669655320" to /tmp/TestFunctionalparallelMountCmdany-port1258012015/001/test-1763890096669655320
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776058 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (350.431412ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 09:28:17.020392    7109 retry.go:31] will retry after 688.158619ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 09:28 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 09:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 09:28 test-1763890096669655320
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh cat /mount-9p/test-1763890096669655320
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-776058 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [a92b889e-4d61-4bd2-9ab2-3d7a080e7867] Pending
helpers_test.go:352: "busybox-mount" [a92b889e-4d61-4bd2-9ab2-3d7a080e7867] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [a92b889e-4d61-4bd2-9ab2-3d7a080e7867] Running
helpers_test.go:352: "busybox-mount" [a92b889e-4d61-4bd2-9ab2-3d7a080e7867] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [a92b889e-4d61-4bd2-9ab2-3d7a080e7867] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003658667s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-776058 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-776058 /tmp/TestFunctionalparallelMountCmdany-port1258012015/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.43s)
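The any-port mount test above amounts to: expose a host directory into the guest over 9p, confirm the mount with findmnt, check the files from inside the guest, then unmount. The same steps by hand, using the commands logged above (the host directory here is a placeholder for this run's temp dir; any writable host path works):
	out/minikube-linux-amd64 mount -p functional-776058 <host-dir>:/mount-9p --alsologtostderr -v=1
	out/minikube-linux-amd64 -p functional-776058 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-776058 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-776058 ssh "sudo umount -f /mount-9p"
As in the log, the mount command blocks, so it needs to run in a separate terminal before the findmnt check.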

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 service list -o json
functional_test.go:1504: Took "540.391588ms" to run "out/minikube-linux-amd64 -p functional-776058 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30617
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30617
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-776058 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-776058
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-776058
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-776058 image ls --format short --alsologtostderr:
I1123 09:28:29.896760   55305 out.go:360] Setting OutFile to fd 1 ...
I1123 09:28:29.896954   55305 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:28:29.896967   55305 out.go:374] Setting ErrFile to fd 2...
I1123 09:28:29.896975   55305 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:28:29.897300   55305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
I1123 09:28:29.898096   55305 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 09:28:29.898238   55305 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 09:28:29.898786   55305 cli_runner.go:164] Run: docker container inspect functional-776058 --format={{.State.Status}}
I1123 09:28:29.922402   55305 ssh_runner.go:195] Run: systemctl --version
I1123 09:28:29.922500   55305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-776058
I1123 09:28:29.945177   55305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/functional-776058/id_rsa Username:docker}
I1123 09:28:30.050567   55305 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-776058 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ docker.io/kicbase/echo-server               │ functional-776058  │ sha256:9056ab │ 2.37MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:d4918c │ 22.6MB │
│ docker.io/library/nginx                     │ latest             │ sha256:60adc2 │ 59.8MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/library/minikube-local-cache-test │ functional-776058  │ sha256:3deee6 │ 990B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-776058 image ls --format table --alsologtostderr:
I1123 09:28:30.691523   55705 out.go:360] Setting OutFile to fd 1 ...
I1123 09:28:30.691650   55705 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:28:30.691660   55705 out.go:374] Setting ErrFile to fd 2...
I1123 09:28:30.691665   55705 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:28:30.691906   55705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
I1123 09:28:30.692540   55705 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 09:28:30.692655   55705 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 09:28:30.693128   55705 cli_runner.go:164] Run: docker container inspect functional-776058 --format={{.State.Status}}
I1123 09:28:30.716141   55705 ssh_runner.go:195] Run: systemctl --version
I1123 09:28:30.716198   55705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-776058
I1123 09:28:30.739678   55705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/functional-776058/id_rsa Username:docker}
I1123 09:28:30.846353   55705 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-776058 image ls --format json --alsologtostderr:
[{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-776058"],"size":"2372971"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e
25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22631814"},{"id":"sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"59772801"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["g
cr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbc
ca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:3deee68b581a5738c1164c7f6e8d8d52991c0019658edc887bf0d79f01d91411","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-776058"],"size":"990"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTa
gs":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-776058 image ls --format json --alsologtostderr:
I1123 09:28:30.432698   55606 out.go:360] Setting OutFile to fd 1 ...
I1123 09:28:30.433144   55606 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:28:30.433153   55606 out.go:374] Setting ErrFile to fd 2...
I1123 09:28:30.433158   55606 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:28:30.433382   55606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
I1123 09:28:30.433997   55606 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 09:28:30.434100   55606 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 09:28:30.434562   55606 cli_runner.go:164] Run: docker container inspect functional-776058 --format={{.State.Status}}
I1123 09:28:30.457202   55606 ssh_runner.go:195] Run: systemctl --version
I1123 09:28:30.457287   55606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-776058
I1123 09:28:30.482290   55606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/functional-776058/id_rsa Username:docker}
I1123 09:28:30.589799   55606 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-776058 image ls --format yaml --alsologtostderr:
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "59772801"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "22631814"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-776058
size: "2372971"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:3deee68b581a5738c1164c7f6e8d8d52991c0019658edc887bf0d79f01d91411
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-776058
size: "990"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-776058 image ls --format yaml --alsologtostderr:
I1123 09:28:30.169458   55450 out.go:360] Setting OutFile to fd 1 ...
I1123 09:28:30.169694   55450 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:28:30.169702   55450 out.go:374] Setting ErrFile to fd 2...
I1123 09:28:30.169706   55450 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:28:30.169931   55450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
I1123 09:28:30.170549   55450 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 09:28:30.170643   55450 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 09:28:30.171059   55450 cli_runner.go:164] Run: docker container inspect functional-776058 --format={{.State.Status}}
I1123 09:28:30.192964   55450 ssh_runner.go:195] Run: systemctl --version
I1123 09:28:30.193036   55450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-776058
I1123 09:28:30.214549   55450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/functional-776058/id_rsa Username:docker}
I1123 09:28:30.325572   55450 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776058 ssh pgrep buildkitd: exit status 1 (321.133241ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image build -t localhost/my-image:functional-776058 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-776058 image build -t localhost/my-image:functional-776058 testdata/build --alsologtostderr: (4.203328435s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-776058 image build -t localhost/my-image:functional-776058 testdata/build --alsologtostderr:
I1123 09:28:30.522185   55635 out.go:360] Setting OutFile to fd 1 ...
I1123 09:28:30.522413   55635 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:28:30.522425   55635 out.go:374] Setting ErrFile to fd 2...
I1123 09:28:30.522430   55635 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:28:30.522621   55635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
I1123 09:28:30.523193   55635 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 09:28:30.523982   55635 config.go:182] Loaded profile config "functional-776058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 09:28:30.524504   55635 cli_runner.go:164] Run: docker container inspect functional-776058 --format={{.State.Status}}
I1123 09:28:30.545756   55635 ssh_runner.go:195] Run: systemctl --version
I1123 09:28:30.545816   55635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-776058
I1123 09:28:30.566899   55635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/functional-776058/id_rsa Username:docker}
I1123 09:28:30.673475   55635 build_images.go:162] Building image from path: /tmp/build.4215179784.tar
I1123 09:28:30.673547   55635 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 09:28:30.683971   55635 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4215179784.tar
I1123 09:28:30.688946   55635 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4215179784.tar: stat -c "%s %y" /var/lib/minikube/build/build.4215179784.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4215179784.tar': No such file or directory
I1123 09:28:30.688980   55635 ssh_runner.go:362] scp /tmp/build.4215179784.tar --> /var/lib/minikube/build/build.4215179784.tar (3072 bytes)
I1123 09:28:30.711435   55635 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4215179784
I1123 09:28:30.721666   55635 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4215179784 -xf /var/lib/minikube/build/build.4215179784.tar
I1123 09:28:30.733219   55635 containerd.go:394] Building image: /var/lib/minikube/build/build.4215179784
I1123 09:28:30.733314   55635 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4215179784 --local dockerfile=/var/lib/minikube/build/build.4215179784 --output type=image,name=localhost/my-image:functional-776058
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s
#6 [2/3] RUN true
#6 DONE 1.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:b85720227e6e6c7d461d582c9e279e0e9f9b41d40030e337e54af5da9aa5b7a5 0.0s done
#8 exporting config sha256:af7e88a0fe758dd496ae300fc24d4705c9ca94f91756cd6bfe67a34cc6d42d29 0.0s done
#8 naming to localhost/my-image:functional-776058 done
#8 DONE 0.1s
I1123 09:28:34.626412   55635 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4215179784 --local dockerfile=/var/lib/minikube/build/build.4215179784 --output type=image,name=localhost/my-image:functional-776058: (3.893019864s)
I1123 09:28:34.626511   55635 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4215179784
I1123 09:28:34.637828   55635 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4215179784.tar
I1123 09:28:34.648474   55635 build_images.go:218] Built localhost/my-image:functional-776058 from /tmp/build.4215179784.tar
I1123 09:28:34.648512   55635 build_images.go:134] succeeded building to: functional-776058
I1123 09:28:34.648539   55635 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.80s)
TestFunctional/parallel/ImageCommands/Setup (2s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.870558117s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-776058
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.00s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image load --daemon kicbase/echo-server:functional-776058 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image load --daemon kicbase/echo-server:functional-776058 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-776058
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image load --daemon kicbase/echo-server:functional-776058 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-776058 image load --daemon kicbase/echo-server:functional-776058 --alsologtostderr: (1.017051547s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.22s)
TestFunctional/parallel/MountCmd/specific-port (1.76s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-776058 /tmp/TestFunctionalparallelMountCmdspecific-port2156181795/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776058 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (346.108848ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1123 09:28:26.445449    7109 retry.go:31] will retry after 272.914439ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-776058 /tmp/TestFunctionalparallelMountCmdspecific-port2156181795/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
2025/11/23 09:28:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776058 ssh "sudo umount -f /mount-9p": exit status 1 (305.797875ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-776058 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-776058 /tmp/TestFunctionalparallelMountCmdspecific-port2156181795/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.76s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image save kicbase/echo-server:functional-776058 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image rm kicbase/echo-server:functional-776058 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)
TestFunctional/parallel/MountCmd/VerifyCleanup (2.27s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-776058 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2851727475/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-776058 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2851727475/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-776058 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2851727475/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776058 ssh "findmnt -T" /mount1: exit status 1 (448.379095ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1123 09:28:28.312363    7109 retry.go:31] will retry after 722.893311ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-776058 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-776058 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2851727475/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-776058 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2851727475/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-776058 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2851727475/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.27s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-776058
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-776058 image save --daemon kicbase/echo-server:functional-776058 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-776058
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-776058
--- PASS: TestFunctional/delete_echo-server_images (0.04s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-776058
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-776058
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestMultiControlPlane/serial/StartCluster (144.5s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1123 09:30:07.370404    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-238306 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m23.719345851s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (144.50s)
TestMultiControlPlane/serial/DeployApp (6s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-238306 kubectl -- rollout status deployment/busybox: (3.735471308s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-5klnb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-jwntg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-v97sr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-5klnb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-jwntg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-v97sr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-5klnb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-jwntg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-v97sr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.00s)
TestMultiControlPlane/serial/PingHostFromPods (1.23s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-5klnb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-5klnb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-jwntg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-jwntg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-v97sr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 kubectl -- exec busybox-7b57f96db7-v97sr -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)
TestMultiControlPlane/serial/AddWorkerNode (25.17s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-238306 node add --alsologtostderr -v 5: (24.113525238s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-amd64 -p ha-238306 status --alsologtostderr -v 5: (1.059919892s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.17s)
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-238306 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)
TestMultiControlPlane/serial/CopyFile (18.81s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp testdata/cp-test.txt ha-238306:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3307614909/001/cp-test_ha-238306.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306:/home/docker/cp-test.txt ha-238306-m02:/home/docker/cp-test_ha-238306_ha-238306-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m02 "sudo cat /home/docker/cp-test_ha-238306_ha-238306-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306:/home/docker/cp-test.txt ha-238306-m03:/home/docker/cp-test_ha-238306_ha-238306-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m03 "sudo cat /home/docker/cp-test_ha-238306_ha-238306-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306:/home/docker/cp-test.txt ha-238306-m04:/home/docker/cp-test_ha-238306_ha-238306-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m04 "sudo cat /home/docker/cp-test_ha-238306_ha-238306-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp testdata/cp-test.txt ha-238306-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3307614909/001/cp-test_ha-238306-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306-m02:/home/docker/cp-test.txt ha-238306:/home/docker/cp-test_ha-238306-m02_ha-238306.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306 "sudo cat /home/docker/cp-test_ha-238306-m02_ha-238306.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306-m02:/home/docker/cp-test.txt ha-238306-m03:/home/docker/cp-test_ha-238306-m02_ha-238306-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m03 "sudo cat /home/docker/cp-test_ha-238306-m02_ha-238306-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306-m02:/home/docker/cp-test.txt ha-238306-m04:/home/docker/cp-test_ha-238306-m02_ha-238306-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m04 "sudo cat /home/docker/cp-test_ha-238306-m02_ha-238306-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp testdata/cp-test.txt ha-238306-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3307614909/001/cp-test_ha-238306-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306-m03:/home/docker/cp-test.txt ha-238306:/home/docker/cp-test_ha-238306-m03_ha-238306.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306 "sudo cat /home/docker/cp-test_ha-238306-m03_ha-238306.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306-m03:/home/docker/cp-test.txt ha-238306-m02:/home/docker/cp-test_ha-238306-m03_ha-238306-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m02 "sudo cat /home/docker/cp-test_ha-238306-m03_ha-238306-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306-m03:/home/docker/cp-test.txt ha-238306-m04:/home/docker/cp-test_ha-238306-m03_ha-238306-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m04 "sudo cat /home/docker/cp-test_ha-238306-m03_ha-238306-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp testdata/cp-test.txt ha-238306-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3307614909/001/cp-test_ha-238306-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306-m04:/home/docker/cp-test.txt ha-238306:/home/docker/cp-test_ha-238306-m04_ha-238306.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306 "sudo cat /home/docker/cp-test_ha-238306-m04_ha-238306.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306-m04:/home/docker/cp-test.txt ha-238306-m02:/home/docker/cp-test_ha-238306-m04_ha-238306-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m02 "sudo cat /home/docker/cp-test_ha-238306-m04_ha-238306-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 cp ha-238306-m04:/home/docker/cp-test.txt ha-238306-m03:/home/docker/cp-test_ha-238306-m04_ha-238306-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 ssh -n ha-238306-m03 "sudo cat /home/docker/cp-test_ha-238306-m04_ha-238306-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.81s)
TestMultiControlPlane/serial/StopSecondaryNode (12.85s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-238306 node stop m02 --alsologtostderr -v 5: (12.098650101s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-238306 status --alsologtostderr -v 5: exit status 7 (755.181849ms)
-- stdout --
	ha-238306
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-238306-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-238306-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-238306-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1123 09:32:22.142083   77133 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:32:22.142200   77133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:22.142212   77133 out.go:374] Setting ErrFile to fd 2...
	I1123 09:32:22.142218   77133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:22.142458   77133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:32:22.142660   77133 out.go:368] Setting JSON to false
	I1123 09:32:22.142692   77133 mustload.go:66] Loading cluster: ha-238306
	I1123 09:32:22.142796   77133 notify.go:221] Checking for updates...
	I1123 09:32:22.143206   77133 config.go:182] Loaded profile config "ha-238306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:32:22.143234   77133 status.go:174] checking status of ha-238306 ...
	I1123 09:32:22.143830   77133 cli_runner.go:164] Run: docker container inspect ha-238306 --format={{.State.Status}}
	I1123 09:32:22.164886   77133 status.go:371] ha-238306 host status = "Running" (err=<nil>)
	I1123 09:32:22.164916   77133 host.go:66] Checking if "ha-238306" exists ...
	I1123 09:32:22.165234   77133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-238306
	I1123 09:32:22.186649   77133 host.go:66] Checking if "ha-238306" exists ...
	I1123 09:32:22.186956   77133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:32:22.187004   77133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-238306
	I1123 09:32:22.207737   77133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/ha-238306/id_rsa Username:docker}
	I1123 09:32:22.308278   77133 ssh_runner.go:195] Run: systemctl --version
	I1123 09:32:22.315712   77133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:32:22.330047   77133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:32:22.396817   77133 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 09:32:22.385686314 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:32:22.397409   77133 kubeconfig.go:125] found "ha-238306" server: "https://192.168.49.254:8443"
	I1123 09:32:22.397449   77133 api_server.go:166] Checking apiserver status ...
	I1123 09:32:22.397485   77133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:32:22.411008   77133 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup
	W1123 09:32:22.420382   77133 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:32:22.420450   77133 ssh_runner.go:195] Run: ls
	I1123 09:32:22.424412   77133 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 09:32:22.429713   77133 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 09:32:22.429740   77133 status.go:463] ha-238306 apiserver status = Running (err=<nil>)
	I1123 09:32:22.429748   77133 status.go:176] ha-238306 status: &{Name:ha-238306 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:32:22.429771   77133 status.go:174] checking status of ha-238306-m02 ...
	I1123 09:32:22.430016   77133 cli_runner.go:164] Run: docker container inspect ha-238306-m02 --format={{.State.Status}}
	I1123 09:32:22.451219   77133 status.go:371] ha-238306-m02 host status = "Stopped" (err=<nil>)
	I1123 09:32:22.451245   77133 status.go:384] host is not running, skipping remaining checks
	I1123 09:32:22.451253   77133 status.go:176] ha-238306-m02 status: &{Name:ha-238306-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:32:22.451274   77133 status.go:174] checking status of ha-238306-m03 ...
	I1123 09:32:22.451553   77133 cli_runner.go:164] Run: docker container inspect ha-238306-m03 --format={{.State.Status}}
	I1123 09:32:22.471021   77133 status.go:371] ha-238306-m03 host status = "Running" (err=<nil>)
	I1123 09:32:22.471044   77133 host.go:66] Checking if "ha-238306-m03" exists ...
	I1123 09:32:22.471365   77133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-238306-m03
	I1123 09:32:22.492145   77133 host.go:66] Checking if "ha-238306-m03" exists ...
	I1123 09:32:22.492505   77133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:32:22.492542   77133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-238306-m03
	I1123 09:32:22.512124   77133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/ha-238306-m03/id_rsa Username:docker}
	I1123 09:32:22.614141   77133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:32:22.627925   77133 kubeconfig.go:125] found "ha-238306" server: "https://192.168.49.254:8443"
	I1123 09:32:22.627951   77133 api_server.go:166] Checking apiserver status ...
	I1123 09:32:22.627981   77133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:32:22.640308   77133 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1317/cgroup
	W1123 09:32:22.649438   77133 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1317/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:32:22.649517   77133 ssh_runner.go:195] Run: ls
	I1123 09:32:22.653581   77133 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 09:32:22.657818   77133 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 09:32:22.657841   77133 status.go:463] ha-238306-m03 apiserver status = Running (err=<nil>)
	I1123 09:32:22.657849   77133 status.go:176] ha-238306-m03 status: &{Name:ha-238306-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:32:22.657866   77133 status.go:174] checking status of ha-238306-m04 ...
	I1123 09:32:22.658104   77133 cli_runner.go:164] Run: docker container inspect ha-238306-m04 --format={{.State.Status}}
	I1123 09:32:22.676672   77133 status.go:371] ha-238306-m04 host status = "Running" (err=<nil>)
	I1123 09:32:22.676702   77133 host.go:66] Checking if "ha-238306-m04" exists ...
	I1123 09:32:22.676969   77133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-238306-m04
	I1123 09:32:22.696540   77133 host.go:66] Checking if "ha-238306-m04" exists ...
	I1123 09:32:22.696810   77133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:32:22.696861   77133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-238306-m04
	I1123 09:32:22.717256   77133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/ha-238306-m04/id_rsa Username:docker}
	I1123 09:32:22.816819   77133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:32:22.829965   77133 status.go:176] ha-238306-m04 status: &{Name:ha-238306-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.85s)
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1123 09:32:23.501720    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)
TestMultiControlPlane/serial/RestartSecondaryNode (9.1s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-238306 node start m02 --alsologtostderr -v 5: (8.072121341s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.10s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.4s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 stop --alsologtostderr -v 5
E1123 09:32:51.214447    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:03.781953    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:03.788496    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:03.800074    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:03.821574    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:03.863049    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:03.944534    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:04.106123    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:04.427902    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:05.069985    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:06.351655    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:08.914594    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-238306 stop --alsologtostderr -v 5: (37.488988785s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 start --wait true --alsologtostderr -v 5
E1123 09:33:14.036495    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:24.278763    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:33:44.760985    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-238306 start --wait true --alsologtostderr -v 5: (59.772734903s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.40s)
TestMultiControlPlane/serial/DeleteSecondaryNode (9.62s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-238306 node delete m03 --alsologtostderr -v 5: (8.744361049s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.62s)
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)
TestMultiControlPlane/serial/StopCluster (36.33s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 stop --alsologtostderr -v 5
E1123 09:34:25.724181    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-238306 stop --alsologtostderr -v 5: (36.200139621s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-238306 status --alsologtostderr -v 5: exit status 7 (127.018637ms)
-- stdout --
	ha-238306
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-238306-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-238306-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:34:57.754020   93390 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:34:57.754570   93390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:34:57.754580   93390 out.go:374] Setting ErrFile to fd 2...
	I1123 09:34:57.754585   93390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:34:57.754826   93390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:34:57.755058   93390 out.go:368] Setting JSON to false
	I1123 09:34:57.755094   93390 mustload.go:66] Loading cluster: ha-238306
	I1123 09:34:57.755212   93390 notify.go:221] Checking for updates...
	I1123 09:34:57.755574   93390 config.go:182] Loaded profile config "ha-238306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:34:57.755596   93390 status.go:174] checking status of ha-238306 ...
	I1123 09:34:57.756098   93390 cli_runner.go:164] Run: docker container inspect ha-238306 --format={{.State.Status}}
	I1123 09:34:57.775540   93390 status.go:371] ha-238306 host status = "Stopped" (err=<nil>)
	I1123 09:34:57.775576   93390 status.go:384] host is not running, skipping remaining checks
	I1123 09:34:57.775583   93390 status.go:176] ha-238306 status: &{Name:ha-238306 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:34:57.775610   93390 status.go:174] checking status of ha-238306-m02 ...
	I1123 09:34:57.775893   93390 cli_runner.go:164] Run: docker container inspect ha-238306-m02 --format={{.State.Status}}
	I1123 09:34:57.795297   93390 status.go:371] ha-238306-m02 host status = "Stopped" (err=<nil>)
	I1123 09:34:57.795358   93390 status.go:384] host is not running, skipping remaining checks
	I1123 09:34:57.795376   93390 status.go:176] ha-238306-m02 status: &{Name:ha-238306-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:34:57.795407   93390 status.go:174] checking status of ha-238306-m04 ...
	I1123 09:34:57.795695   93390 cli_runner.go:164] Run: docker container inspect ha-238306-m04 --format={{.State.Status}}
	I1123 09:34:57.814939   93390 status.go:371] ha-238306-m04 host status = "Stopped" (err=<nil>)
	I1123 09:34:57.814963   93390 status.go:384] host is not running, skipping remaining checks
	I1123 09:34:57.814969   93390 status.go:176] ha-238306-m04 status: &{Name:ha-238306-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.33s)
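Note: the stop-then-status behaviour above condenses to the following sketch; the non-zero exit is expected, as the exit status 7 captured above shows, because every host is stopped:
    minikube -p ha-238306 stop --alsologtostderr -v 5
    minikube -p ha-238306 status --alsologtostderr -v 5 || echo "status exited with $?"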

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (54.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1123 09:35:47.646577    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-238306 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (53.387207131s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.27s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (38.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-238306 node add --control-plane --alsologtostderr -v 5: (37.528676775s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-238306 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.48s)
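Note: adding a further control-plane node to a running HA profile, as exercised here, looks roughly like this (sketch, same assumptions as the earlier notes):
    minikube -p ha-238306 node add --control-plane --alsologtostderr -v 5
    minikube -p ha-238306 status --alsologtostderr -v 5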

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

                                                
                                    
x
+
TestJSONOutput/start/Command (38.49s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-091445 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-091445 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (38.487685329s)
--- PASS: TestJSONOutput/start/Command (38.49s)
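Note: the JSON-output start being tested can be invoked directly as below (sketch; in this mode minikube prints one JSON event per line, as the TestErrorJSONOutput stdout further down illustrates):
    minikube start -p json-output-091445 --output=json --user=testUser --memory=3072 --wait=true \
      --driver=docker --container-runtime=containerd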

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-091445 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-091445 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.98s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-091445 --output=json --user=testUser
E1123 09:37:23.504544    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-091445 --output=json --user=testUser: (5.982826535s)
--- PASS: TestJSONOutput/stop/Command (5.98s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-886962 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-886962 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (83.461428ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"879fbb56-a3ad-4556-8d29-3dbde1c23902","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-886962] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1bfd8530-2216-4b66-850d-ad2a2dddbae2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21968"}}
	{"specversion":"1.0","id":"72d8b90d-97da-49e2-9d28-3da1b59d4fa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c9a31f57-7d4f-4917-860a-47b9a24ab8b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig"}}
	{"specversion":"1.0","id":"9b63b046-c49c-4e3d-bf12-ae6927afcdea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube"}}
	{"specversion":"1.0","id":"f3e828cc-778c-49d1-b068-3f0bab145f8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"50a814f6-198c-407a-9302-fa406126bda6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dba36f9c-d78e-4ae3-b2f3-f1833379aab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-886962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-886962
--- PASS: TestErrorJSONOutput (0.25s)
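Note: the events emitted in --output=json mode are one JSON object per line (specversion, id, source, type, data), so they can be post-processed with standard tooling. A hypothetical filter that keeps only error events such as the DRV_UNSUPPORTED_OS one above, assuming jq is available (profile name illustrative):
    minikube start -p json-output-error --output=json --driver=fail | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'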

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (36.6s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-240273 --network=
E1123 09:38:03.781906    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-240273 --network=: (34.330818805s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-240273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-240273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-240273: (2.24570014s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.60s)
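Note: the custom-network start used here condenses to the sketch below; passing an empty --network value asks minikube to create its own docker network for the profile, which the test then confirms by listing networks:
    minikube start -p docker-network-240273 --network=
    docker network ls --format {{.Name}}
    minikube delete -p docker-network-240273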

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (23.59s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-803239 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-803239 --network=bridge: (21.50024244s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-803239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-803239
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-803239: (2.064185148s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.59s)

                                                
                                    
x
+
TestKicExistingNetwork (24.8s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1123 09:38:31.400590    7109 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1123 09:38:31.419232    7109 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1123 09:38:31.419296    7109 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1123 09:38:31.419315    7109 cli_runner.go:164] Run: docker network inspect existing-network
W1123 09:38:31.437731    7109 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1123 09:38:31.437769    7109 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1123 09:38:31.437800    7109 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1123 09:38:31.437930    7109 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1123 09:38:31.458803    7109 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-de5cba392bb4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:8d:f5:88:bc:8b} reservation:<nil>}
I1123 09:38:31.459251    7109 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020c6f20}
I1123 09:38:31.459278    7109 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1123 09:38:31.459355    7109 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
E1123 09:38:31.488808    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1123 09:38:31.512538    7109 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-562369 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-562369 --network=existing-network: (22.60897206s)
helpers_test.go:175: Cleaning up "existing-network-562369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-562369
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-562369: (2.039369304s)
I1123 09:38:56.181814    7109 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.80s)
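Note: the flow above (a pre-created docker network reused by minikube) condenses to roughly the following; the docker network create call is trimmed to the essential flags from the log:
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    minikube start -p existing-network-562369 --network=existing-network
    minikube delete -p existing-network-562369
    docker network rm existing-network    # clean up the pre-created network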

                                                
                                    
x
+
TestKicCustomSubnet (27.76s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-558232 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-558232 --subnet=192.168.60.0/24: (25.502392947s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-558232 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-558232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-558232
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-558232: (2.234968754s)
--- PASS: TestKicCustomSubnet (27.76s)
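Note: the subnet request and its verification, condensed from the logged commands above:
    minikube start -p custom-subnet-558232 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-558232 --format "{{(index .IPAM.Config 0).Subnet}}"    # expect 192.168.60.0/24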

                                                
                                    
x
+
TestKicStaticIP (28.74s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-117541 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-117541 --static-ip=192.168.200.200: (26.383396814s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-117541 ip
helpers_test.go:175: Cleaning up "static-ip-117541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-117541
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-117541: (2.206797576s)
--- PASS: TestKicStaticIP (28.74s)
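Note: the static-IP variant condenses to:
    minikube start -p static-ip-117541 --static-ip=192.168.200.200
    minikube -p static-ip-117541 ip    # expected to print 192.168.200.200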

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (52.34s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-009564 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-009564 --driver=docker  --container-runtime=containerd: (23.264213118s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-012000 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-012000 --driver=docker  --container-runtime=containerd: (22.940290131s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-009564
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-012000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-012000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-012000
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-012000: (2.401480075s)
helpers_test.go:175: Cleaning up "first-009564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-009564
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-009564: (2.401314557s)
--- PASS: TestMinikubeProfile (52.34s)
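Note: the profile-switching flow above, condensed (sketch; binary assumed to be on PATH):
    minikube start -p first-009564 --driver=docker --container-runtime=containerd
    minikube start -p second-012000 --driver=docker --container-runtime=containerd
    minikube profile first-009564        # make first-009564 the active profile
    minikube profile list -ojson         # both profiles should be listed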

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.67s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-835161 --memory=3072 --mount-string /tmp/TestMountStartserial4086988752/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-835161 --memory=3072 --mount-string /tmp/TestMountStartserial4086988752/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.673677533s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-835161 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)
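Note: the mount-at-start invocation and its verification, condensed from the two subtests above (sketch; the host path is illustrative, the test uses a per-run temp directory):
    minikube start -p mount-start-1-835161 --memory=3072 \
      --mount-string /tmp/host-dir:/minikube-host \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=containerd
    minikube -p mount-start-1-835161 ssh -- ls /minikube-host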

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (4.7s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-845701 --memory=3072 --mount-string /tmp/TestMountStartserial4086988752/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-845701 --memory=3072 --mount-string /tmp/TestMountStartserial4086988752/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.69565152s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-845701 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.74s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-835161 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-835161 --alsologtostderr -v=5: (1.735230851s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-845701 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-845701
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-845701: (1.272617227s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.6s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-845701
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-845701: (6.599482016s)
--- PASS: TestMountStart/serial/RestartStopped (7.60s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-845701 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (97.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-443516 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1123 09:42:23.502689    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-443516 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m37.453376553s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (97.99s)
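Note: bringing up the two-node cluster exercised here by hand, with the same flags as the logged command:
    minikube start -p multinode-443516 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr \
      --driver=docker --container-runtime=containerd
    minikube -p multinode-443516 status --alsologtostderr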

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-443516 -- rollout status deployment/busybox: (3.38116992s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- exec busybox-7b57f96db7-2j6z4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- exec busybox-7b57f96db7-f57wh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- exec busybox-7b57f96db7-2j6z4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- exec busybox-7b57f96db7-f57wh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- exec busybox-7b57f96db7-2j6z4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- exec busybox-7b57f96db7-f57wh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.97s)
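Note: the DNS smoke test above drives kubectl through the minikube wrapper; against the same cluster context it is roughly equivalent to the sketch below (the pod name placeholder must be substituted with one of the busybox replicas reported by get pods):
    kubectl --context multinode-443516 apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl --context multinode-443516 rollout status deployment/busybox
    kubectl --context multinode-443516 get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl --context multinode-443516 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local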

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- exec busybox-7b57f96db7-2j6z4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- exec busybox-7b57f96db7-2j6z4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- exec busybox-7b57f96db7-f57wh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-443516 -- exec busybox-7b57f96db7-f57wh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (26.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-443516 -v=5 --alsologtostderr
E1123 09:43:03.782067    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-443516 -v=5 --alsologtostderr: (26.231512078s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.92s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-443516 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 cp testdata/cp-test.txt multinode-443516:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 cp multinode-443516:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2023520773/001/cp-test_multinode-443516.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 cp multinode-443516:/home/docker/cp-test.txt multinode-443516-m02:/home/docker/cp-test_multinode-443516_multinode-443516-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516-m02 "sudo cat /home/docker/cp-test_multinode-443516_multinode-443516-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 cp multinode-443516:/home/docker/cp-test.txt multinode-443516-m03:/home/docker/cp-test_multinode-443516_multinode-443516-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516-m03 "sudo cat /home/docker/cp-test_multinode-443516_multinode-443516-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 cp testdata/cp-test.txt multinode-443516-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 cp multinode-443516-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2023520773/001/cp-test_multinode-443516-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 cp multinode-443516-m02:/home/docker/cp-test.txt multinode-443516:/home/docker/cp-test_multinode-443516-m02_multinode-443516.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516 "sudo cat /home/docker/cp-test_multinode-443516-m02_multinode-443516.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 cp multinode-443516-m02:/home/docker/cp-test.txt multinode-443516-m03:/home/docker/cp-test_multinode-443516-m02_multinode-443516-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516-m03 "sudo cat /home/docker/cp-test_multinode-443516-m02_multinode-443516-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 cp testdata/cp-test.txt multinode-443516-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 cp multinode-443516-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2023520773/001/cp-test_multinode-443516-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 cp multinode-443516-m03:/home/docker/cp-test.txt multinode-443516:/home/docker/cp-test_multinode-443516-m03_multinode-443516.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516 "sudo cat /home/docker/cp-test_multinode-443516-m03_multinode-443516.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 cp multinode-443516-m03:/home/docker/cp-test.txt multinode-443516-m02:/home/docker/cp-test_multinode-443516-m03_multinode-443516-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 ssh -n multinode-443516-m02 "sudo cat /home/docker/cp-test_multinode-443516-m03_multinode-443516-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.66s)
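Note: the copy matrix above boils down to two primitives, shown here for a single direction (sketch; node names taken from this run):
    # host -> node
    minikube -p multinode-443516 cp testdata/cp-test.txt multinode-443516:/home/docker/cp-test.txt
    minikube -p multinode-443516 ssh -n multinode-443516 "sudo cat /home/docker/cp-test.txt"
    # node -> node, addressing both ends by node name
    minikube -p multinode-443516 cp multinode-443516:/home/docker/cp-test.txt multinode-443516-m02:/home/docker/cp-test_multinode-443516_multinode-443516-m02.txt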

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-443516 node stop m03: (1.29145628s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-443516 status: exit status 7 (536.967519ms)

                                                
                                                
-- stdout --
	multinode-443516
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-443516-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-443516-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-443516 status --alsologtostderr: exit status 7 (528.169621ms)

                                                
                                                
-- stdout --
	multinode-443516
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-443516-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-443516-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:43:35.219151  155605 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:43:35.219441  155605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:43:35.219452  155605 out.go:374] Setting ErrFile to fd 2...
	I1123 09:43:35.219459  155605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:43:35.219688  155605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:43:35.219882  155605 out.go:368] Setting JSON to false
	I1123 09:43:35.219917  155605 mustload.go:66] Loading cluster: multinode-443516
	I1123 09:43:35.220032  155605 notify.go:221] Checking for updates...
	I1123 09:43:35.220299  155605 config.go:182] Loaded profile config "multinode-443516": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:43:35.220320  155605 status.go:174] checking status of multinode-443516 ...
	I1123 09:43:35.220786  155605 cli_runner.go:164] Run: docker container inspect multinode-443516 --format={{.State.Status}}
	I1123 09:43:35.240436  155605 status.go:371] multinode-443516 host status = "Running" (err=<nil>)
	I1123 09:43:35.240481  155605 host.go:66] Checking if "multinode-443516" exists ...
	I1123 09:43:35.240794  155605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-443516
	I1123 09:43:35.259701  155605 host.go:66] Checking if "multinode-443516" exists ...
	I1123 09:43:35.260013  155605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:43:35.260060  155605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-443516
	I1123 09:43:35.278697  155605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/multinode-443516/id_rsa Username:docker}
	I1123 09:43:35.380082  155605 ssh_runner.go:195] Run: systemctl --version
	I1123 09:43:35.386863  155605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:43:35.400435  155605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:43:35.458710  155605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-23 09:43:35.447160994 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:43:35.459262  155605 kubeconfig.go:125] found "multinode-443516" server: "https://192.168.67.2:8443"
	I1123 09:43:35.459291  155605 api_server.go:166] Checking apiserver status ...
	I1123 09:43:35.459357  155605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:43:35.472279  155605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1299/cgroup
	W1123 09:43:35.481693  155605 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1299/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:43:35.481743  155605 ssh_runner.go:195] Run: ls
	I1123 09:43:35.486164  155605 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1123 09:43:35.491743  155605 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1123 09:43:35.491772  155605 status.go:463] multinode-443516 apiserver status = Running (err=<nil>)
	I1123 09:43:35.491781  155605 status.go:176] multinode-443516 status: &{Name:multinode-443516 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:43:35.491797  155605 status.go:174] checking status of multinode-443516-m02 ...
	I1123 09:43:35.492031  155605 cli_runner.go:164] Run: docker container inspect multinode-443516-m02 --format={{.State.Status}}
	I1123 09:43:35.512669  155605 status.go:371] multinode-443516-m02 host status = "Running" (err=<nil>)
	I1123 09:43:35.512699  155605 host.go:66] Checking if "multinode-443516-m02" exists ...
	I1123 09:43:35.512940  155605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-443516-m02
	I1123 09:43:35.532553  155605 host.go:66] Checking if "multinode-443516-m02" exists ...
	I1123 09:43:35.532834  155605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:43:35.532881  155605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-443516-m02
	I1123 09:43:35.552996  155605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21968-3552/.minikube/machines/multinode-443516-m02/id_rsa Username:docker}
	I1123 09:43:35.653654  155605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:43:35.666867  155605 status.go:176] multinode-443516-m02 status: &{Name:multinode-443516-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:43:35.666905  155605 status.go:174] checking status of multinode-443516-m03 ...
	I1123 09:43:35.667173  155605 cli_runner.go:164] Run: docker container inspect multinode-443516-m03 --format={{.State.Status}}
	I1123 09:43:35.686163  155605 status.go:371] multinode-443516-m03 host status = "Stopped" (err=<nil>)
	I1123 09:43:35.686183  155605 status.go:384] host is not running, skipping remaining checks
	I1123 09:43:35.686191  155605 status.go:176] multinode-443516-m03 status: &{Name:multinode-443516-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
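The Host/Kubelet/APIServer/Kubeconfig/Worker fields printed by status.go above are the same data that `minikube status -o json` emits as JSON (see the NoKubernetes-033672 status output later in this report). A minimal Go sketch of decoding that JSON, assuming an illustrative struct rather than minikube's internal type:

package main

import (
	"encoding/json"
	"fmt"
)

// NodeStatus mirrors the fields shown in the status lines above and in the
// `minikube status -o json` output further down. Illustrative only; not the
// upstream minikube type.
type NodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Sample values taken from the multinode-443516-m03 status line above.
	raw := `{"Name":"multinode-443516-m03","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":true}`
	var st NodeStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// A stopped host short-circuits the remaining component checks, matching
	// the "host is not running, skipping remaining checks" line above.
	fmt.Printf("%s: host=%s worker=%v\n", st.Name, st.Host, st.Worker)
}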

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-443516 node start m03 -v=5 --alsologtostderr: (6.354182338s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.10s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (68.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-443516
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-443516
E1123 09:43:46.578138    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-443516: (25.162417724s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-443516 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-443516 --wait=true -v=5 --alsologtostderr: (43.610113652s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-443516
--- PASS: TestMultiNode/serial/RestartKeepsNodes (68.90s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-443516 node delete m03: (4.756306607s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.39s)
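The readiness check above uses a kubectl go-template that prints the Ready condition of every node. A small Go sketch of the same probe, assuming kubectl is on PATH and dropping the outer quotes that appear in the logged command:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nodesReady runs the readiness probe seen in the test above: a go-template
// that prints the Ready condition status for each node. Sketch only; the
// test drives kubectl through its own helper.
func nodesReady() (bool, error) {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		return false, err
	}
	statuses := strings.Fields(string(out))
	if len(statuses) == 0 {
		return false, nil // no nodes reported at all
	}
	for _, s := range statuses {
		if s != "True" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := nodesReady()
	fmt.Println("all nodes Ready:", ok, "err:", err)
}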

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-443516 stop: (23.951032998s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-443516 status: exit status 7 (108.697709ms)

                                                
                                                
-- stdout --
	multinode-443516
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-443516-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-443516 status --alsologtostderr: exit status 7 (102.187418ms)

                                                
                                                
-- stdout --
	multinode-443516
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-443516-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:45:21.196548  165278 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:45:21.196827  165278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:45:21.196838  165278 out.go:374] Setting ErrFile to fd 2...
	I1123 09:45:21.196842  165278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:45:21.197041  165278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:45:21.197222  165278 out.go:368] Setting JSON to false
	I1123 09:45:21.197253  165278 mustload.go:66] Loading cluster: multinode-443516
	I1123 09:45:21.197382  165278 notify.go:221] Checking for updates...
	I1123 09:45:21.197622  165278 config.go:182] Loaded profile config "multinode-443516": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:45:21.197640  165278 status.go:174] checking status of multinode-443516 ...
	I1123 09:45:21.198065  165278 cli_runner.go:164] Run: docker container inspect multinode-443516 --format={{.State.Status}}
	I1123 09:45:21.217223  165278 status.go:371] multinode-443516 host status = "Stopped" (err=<nil>)
	I1123 09:45:21.217280  165278 status.go:384] host is not running, skipping remaining checks
	I1123 09:45:21.217290  165278 status.go:176] multinode-443516 status: &{Name:multinode-443516 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:45:21.217368  165278 status.go:174] checking status of multinode-443516-m02 ...
	I1123 09:45:21.217658  165278 cli_runner.go:164] Run: docker container inspect multinode-443516-m02 --format={{.State.Status}}
	I1123 09:45:21.236490  165278 status.go:371] multinode-443516-m02 host status = "Stopped" (err=<nil>)
	I1123 09:45:21.236525  165278 status.go:384] host is not running, skipping remaining checks
	I1123 09:45:21.236533  165278 status.go:176] multinode-443516-m02 status: &{Name:multinode-443516-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.16s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-443516 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-443516 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.171711851s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-443516 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.83s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-443516
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-443516-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-443516-m02 --driver=docker  --container-runtime=containerd: exit status 14 (89.264836ms)

                                                
                                                
-- stdout --
	* [multinode-443516-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-443516-m02' is duplicated with machine name 'multinode-443516-m02' in profile 'multinode-443516'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-443516-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-443516-m03 --driver=docker  --container-runtime=containerd: (24.120026218s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-443516
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-443516: exit status 80 (318.366343ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-443516 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-443516-m03 already exists in multinode-443516-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-443516-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-443516-m03: (2.432970607s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.03s)
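The exit-status-14 failure above comes from a uniqueness check: the requested profile name collides with a machine name already owned by the multi-node profile. A hypothetical sketch of that check (the machineNames helper and the node-count argument are illustrative, not minikube's real API):

package main

import "fmt"

// machineNames returns the per-node machine names a multi-node profile owns,
// e.g. "multinode-443516" -> ["multinode-443516", "multinode-443516-m02", ...].
// Hypothetical helper; minikube derives these internally.
func machineNames(profile string, nodes int) []string {
	names := []string{profile}
	for i := 2; i <= nodes; i++ {
		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
	}
	return names
}

// conflicts reports whether a requested profile name collides with a machine
// name claimed by an existing profile, mirroring the MK_USAGE error above.
func conflicts(requested, existingProfile string, existingNodes int) bool {
	for _, n := range machineNames(existingProfile, existingNodes) {
		if n == requested {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(conflicts("multinode-443516-m02", "multinode-443516", 3)) // true  -> exit status 14
	fmt.Println(conflicts("multinode-443516-m03", "multinode-443516", 2)) // false -> start proceeds
}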

                                                
                                    
TestPreload (116.63s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-827382 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E1123 09:47:23.501778    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-827382 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (47.466179698s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-827382 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-827382 image pull gcr.io/k8s-minikube/busybox: (2.360515186s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-827382
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-827382: (6.80137871s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-827382 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1123 09:48:03.782104    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-827382 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (57.220665366s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-827382 image list
helpers_test.go:175: Cleaning up "test-preload-827382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-827382
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-827382: (2.533655779s)
--- PASS: TestPreload (116.63s)

                                                
                                    
TestScheduledStopUnix (99.59s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-393719 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-393719 --memory=3072 --driver=docker  --container-runtime=containerd: (23.574991212s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-393719 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 09:49:00.735161  183469 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:49:00.735306  183469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:49:00.735320  183469 out.go:374] Setting ErrFile to fd 2...
	I1123 09:49:00.735325  183469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:49:00.735634  183469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:49:00.735955  183469 out.go:368] Setting JSON to false
	I1123 09:49:00.736093  183469 mustload.go:66] Loading cluster: scheduled-stop-393719
	I1123 09:49:00.736586  183469 config.go:182] Loaded profile config "scheduled-stop-393719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:49:00.736676  183469 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/config.json ...
	I1123 09:49:00.736950  183469 mustload.go:66] Loading cluster: scheduled-stop-393719
	I1123 09:49:00.737075  183469 config.go:182] Loaded profile config "scheduled-stop-393719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-393719 -n scheduled-stop-393719
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-393719 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 09:49:01.149758  183619 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:49:01.150035  183619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:49:01.150046  183619 out.go:374] Setting ErrFile to fd 2...
	I1123 09:49:01.150050  183619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:49:01.150284  183619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:49:01.150582  183619 out.go:368] Setting JSON to false
	I1123 09:49:01.150811  183619 daemonize_unix.go:73] killing process 183505 as it is an old scheduled stop
	I1123 09:49:01.150925  183619 mustload.go:66] Loading cluster: scheduled-stop-393719
	I1123 09:49:01.151413  183619 config.go:182] Loaded profile config "scheduled-stop-393719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:49:01.151519  183619 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/config.json ...
	I1123 09:49:01.151764  183619 mustload.go:66] Loading cluster: scheduled-stop-393719
	I1123 09:49:01.151919  183619 config.go:182] Loaded profile config "scheduled-stop-393719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 09:49:01.156835    7109 retry.go:31] will retry after 89.865µs: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.157972    7109 retry.go:31] will retry after 129.151µs: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.159083    7109 retry.go:31] will retry after 141.097µs: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.160230    7109 retry.go:31] will retry after 349.152µs: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.161404    7109 retry.go:31] will retry after 407.812µs: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.162536    7109 retry.go:31] will retry after 966.52µs: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.163693    7109 retry.go:31] will retry after 1.451196ms: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.165917    7109 retry.go:31] will retry after 879.766µs: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.167067    7109 retry.go:31] will retry after 2.966318ms: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.170294    7109 retry.go:31] will retry after 4.383019ms: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.175550    7109 retry.go:31] will retry after 2.911085ms: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.178792    7109 retry.go:31] will retry after 10.821263ms: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.190041    7109 retry.go:31] will retry after 14.74852ms: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.205315    7109 retry.go:31] will retry after 22.705601ms: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.228596    7109 retry.go:31] will retry after 23.780327ms: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
I1123 09:49:01.252869    7109 retry.go:31] will retry after 65.577053ms: open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-393719 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-393719 -n scheduled-stop-393719
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-393719
E1123 09:49:26.851045    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-393719 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 09:49:27.127798  184507 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:49:27.128188  184507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:49:27.128201  184507 out.go:374] Setting ErrFile to fd 2...
	I1123 09:49:27.128208  184507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:49:27.128497  184507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:49:27.128759  184507 out.go:368] Setting JSON to false
	I1123 09:49:27.128837  184507 mustload.go:66] Loading cluster: scheduled-stop-393719
	I1123 09:49:27.129150  184507 config.go:182] Loaded profile config "scheduled-stop-393719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:49:27.129221  184507 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/scheduled-stop-393719/config.json ...
	I1123 09:49:27.129444  184507 mustload.go:66] Loading cluster: scheduled-stop-393719
	I1123 09:49:27.129553  184507 config.go:182] Loaded profile config "scheduled-stop-393719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-393719
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-393719: exit status 7 (85.635621ms)

                                                
                                                
-- stdout --
	scheduled-stop-393719
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-393719 -n scheduled-stop-393719
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-393719 -n scheduled-stop-393719: exit status 7 (82.983313ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-393719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-393719
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-393719: (4.379961423s)
--- PASS: TestScheduledStopUnix (99.59s)
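The retry.go lines above poll for the scheduled-stop pid file with steadily growing delays. A rough Go sketch of that pattern, assuming a made-up waitForFile helper rather than minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"time"
)

// waitForFile polls for a file with a roughly doubling delay, similar in
// spirit to the "will retry after ..." intervals in the log above.
func waitForFile(path string, maxWait time.Duration) ([]byte, error) {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		if !errors.Is(err, fs.ErrNotExist) || time.Now().After(deadline) {
			return nil, err
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // keep growing the interval between attempts
	}
}

func main() {
	// Hypothetical path; the test polls the profile's pid file.
	if _, err := waitForFile("/tmp/scheduled-stop-pid", 2*time.Second); err != nil {
		fmt.Println("gave up:", err)
	}
}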

                                                
                                    
TestInsufficientStorage (9.65s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-589773 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-589773 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.053289519s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1255e823-5f53-4037-b30d-f95492474607","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-589773] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b9d3a35-840b-40cf-a34c-7ffbdb54a237","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21968"}}
	{"specversion":"1.0","id":"aea26a35-2a97-49c2-be0f-fde7f591abac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"43396788-4778-4ba0-8230-fe19ee4377d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig"}}
	{"specversion":"1.0","id":"a59fb401-ccb2-478c-ba34-1e60f0558d97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube"}}
	{"specversion":"1.0","id":"a05972d5-a113-4478-b5bb-3e74ca0fbdd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"19bc4720-769c-4b11-ab86-5c5c41eb6337","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"34794da6-0286-487c-a1aa-e2bf6baa64e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"163ad17c-0d06-451a-ac25-bd3db8f34d6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e080133e-8761-4f81-881e-03079947ba54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f0aff04-39bc-47a3-8fa1-8c88e0a17467","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d47e5790-e7a5-410d-96bc-43b3e8995cd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-589773\" primary control-plane node in \"insufficient-storage-589773\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4305a6a8-112c-4e8c-826e-eb20bfa00b70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"83c73d6a-13e4-4845-b90c-ad2eaa834cc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d9581f3-284e-401d-9537-489256100ba0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-589773 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-589773 --output=json --layout=cluster: exit status 7 (321.18737ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-589773","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-589773","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 09:50:24.046269  186762 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-589773" does not appear in /home/jenkins/minikube-integration/21968-3552/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-589773 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-589773 --output=json --layout=cluster: exit status 7 (320.97985ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-589773","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-589773","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 09:50:24.366594  186870 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-589773" does not appear in /home/jenkins/minikube-integration/21968-3552/kubeconfig
	E1123 09:50:24.378756  186870 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/insufficient-storage-589773/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-589773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-589773
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-589773: (1.949609442s)
--- PASS: TestInsufficientStorage (9.65s)
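With --output=json, each progress line above is a CloudEvents-style record whose type distinguishes setup steps, info lines and errors. A minimal Go sketch for decoding one such line (field names copied from the log; the struct is illustrative, not a minikube type):

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors the JSON event lines emitted with --output=json above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Trimmed-down version of the error event shown above.
	line := `{"specversion":"1.0","id":"x","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		// Exit code 26 corresponds to the RSRC_DOCKER_STORAGE failure above.
		fmt.Printf("error event %s, exitcode=%s\n", ev.Data["name"], ev.Data["exitcode"])
	}
}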

                                                
                                    
TestRunningBinaryUpgrade (57.62s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2223957466 start -p running-upgrade-912004 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2223957466 start -p running-upgrade-912004 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (29.651577586s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-912004 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-912004 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.697366908s)
helpers_test.go:175: Cleaning up "running-upgrade-912004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-912004
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-912004: (3.426149026s)
--- PASS: TestRunningBinaryUpgrade (57.62s)

                                                
                                    
TestKubernetesUpgrade (149.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-816986 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-816986 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.971043378s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-816986
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-816986: (1.932696029s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-816986 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-816986 status --format={{.Host}}: exit status 7 (92.107441ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-816986 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1123 09:53:03.781441    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-816986 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m49.16739009s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-816986 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-816986 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-816986 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (109.421319ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-816986] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-816986
	    minikube start -p kubernetes-upgrade-816986 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8169862 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-816986 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-816986 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-816986 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (10.212938874s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-816986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-816986
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-816986: (2.56445216s)
--- PASS: TestKubernetesUpgrade (149.12s)
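The exit-status-106 refusal above guards against in-place downgrades: the requested v1.28.0 is older than the running v1.34.1. A simplified sketch of such a comparison (minikube uses a proper semver library; this hand-rolled parse is only illustrative):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion splits "v1.34.1" into numeric components.
func parseVersion(v string) []int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	out := make([]int, len(parts))
	for i, p := range parts {
		out[i], _ = strconv.Atoi(p)
	}
	return out
}

// isDowngrade reports whether requested is older than current, the condition
// behind the K8S_DOWNGRADE_UNSUPPORTED error (exit status 106) above.
func isDowngrade(current, requested string) bool {
	c, r := parseVersion(current), parseVersion(requested)
	for i := 0; i < len(c) && i < len(r); i++ {
		if r[i] != c[i] {
			return r[i] < c[i]
		}
	}
	return len(r) < len(c)
}

func main() {
	fmt.Println(isDowngrade("v1.34.1", "v1.28.0")) // true  -> refuse
	fmt.Println(isDowngrade("v1.28.0", "v1.34.1")) // false -> upgrade allowed
}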

                                                
                                    
TestMissingContainerUpgrade (102.83s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3852374712 start -p missing-upgrade-914321 --memory=3072 --driver=docker  --container-runtime=containerd
E1123 09:52:23.502460    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/addons-300235/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3852374712 start -p missing-upgrade-914321 --memory=3072 --driver=docker  --container-runtime=containerd: (25.75041306s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-914321
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-914321
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-914321 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-914321 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m11.244991654s)
helpers_test.go:175: Cleaning up "missing-upgrade-914321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-914321
I1123 09:53:59.001395    7109 config.go:182] Loaded profile config "auto-676928": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-914321: (2.179126992s)
--- PASS: TestMissingContainerUpgrade (102.83s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.83s)

                                                
                                    
TestPause/serial/Start (56.24s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-894987 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-894987 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (56.238861315s)
--- PASS: TestPause/serial/Start (56.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-033672 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-033672 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (100.709831ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-033672] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
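The exit-status-14 result above is pure flag validation: --kubernetes-version is rejected whenever --no-kubernetes is set. A hedged sketch of that check (not minikube's actual flag handling):

package main

import (
	"errors"
	"fmt"
)

// validateStartFlags reproduces the MK_USAGE condition above:
// --kubernetes-version cannot be combined with --no-kubernetes.
func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	if err := validateStartFlags(true, "v1.28.0"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err) // exit status 14 in the run above
	}
}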

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (35.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-033672 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-033672 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.08645861s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-033672 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.48s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (105.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.768320723 start -p stopped-upgrade-918790 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.768320723 start -p stopped-upgrade-918790 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m14.022279072s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.768320723 -p stopped-upgrade-918790 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.768320723 -p stopped-upgrade-918790 stop: (4.395892581s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-918790 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-918790 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.155572514s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (105.57s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (11.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-033672 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-033672 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.133257911s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-033672 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-033672 status -o json: exit status 2 (400.980623ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-033672","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-033672
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-033672: (2.879349495s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.41s)

                                                
                                    
TestNoKubernetes/serial/Start (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-033672 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-033672 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (5.298263903s)
--- PASS: TestNoKubernetes/serial/Start (5.30s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21968-3552/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-033672 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-033672 "sudo systemctl is-active --quiet service kubelet": exit status 1 (318.011261ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
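The check above asserts that kubelet is not running by invoking systemctl is-active over SSH; a non-zero exit (status 3 here) means the unit is inactive. A local Go sketch of the same probe, without the minikube ssh wrapper:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive runs the same kind of check the test performs: a zero exit
// from `systemctl is-active --quiet` means the unit is active, a non-zero
// exit means it is not running. Local sketch only.
func kubeletActive() (bool, error) {
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // unit exists but is inactive/stopped
	}
	return false, err // systemctl itself could not be run
}

func main() {
	active, err := kubeletActive()
	fmt.Println("kubelet active:", active, "err:", err)
}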

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.75s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-033672
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-033672: (3.501831635s)
--- PASS: TestNoKubernetes/serial/Stop (3.50s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.71s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-894987 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-894987 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.685920579s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.71s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-033672 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-033672 --driver=docker  --container-runtime=containerd: (7.323639979s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.32s)

                                                
                                    
TestNetworkPlugins/group/false (4.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-676928 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-676928 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (198.477604ms)

                                                
                                                
-- stdout --
	* [false-676928] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:51:29.487614  204130 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:51:29.487943  204130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:51:29.487955  204130 out.go:374] Setting ErrFile to fd 2...
	I1123 09:51:29.487961  204130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:51:29.488219  204130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3552/.minikube/bin
	I1123 09:51:29.488790  204130 out.go:368] Setting JSON to false
	I1123 09:51:29.490064  204130 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2029,"bootTime":1763889461,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:51:29.490133  204130 start.go:143] virtualization: kvm guest
	I1123 09:51:29.493580  204130 out.go:179] * [false-676928] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:51:29.495615  204130 notify.go:221] Checking for updates...
	I1123 09:51:29.495632  204130 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:51:29.497633  204130 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:51:29.499949  204130 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3552/kubeconfig
	I1123 09:51:29.501810  204130 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3552/.minikube
	I1123 09:51:29.504266  204130 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:51:29.506488  204130 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:51:29.509082  204130 config.go:182] Loaded profile config "NoKubernetes-033672": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1123 09:51:29.509230  204130 config.go:182] Loaded profile config "pause-894987": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:51:29.509408  204130 config.go:182] Loaded profile config "stopped-upgrade-918790": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1123 09:51:29.509537  204130 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:51:29.541788  204130 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:51:29.541912  204130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:51:29.608618  204130 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 09:51:29.5971171 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[N
ame:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:51:29.608757  204130 docker.go:319] overlay module found
	I1123 09:51:29.611436  204130 out.go:179] * Using the docker driver based on user configuration
	I1123 09:51:29.613041  204130 start.go:309] selected driver: docker
	I1123 09:51:29.613063  204130 start.go:927] validating driver "docker" against <nil>
	I1123 09:51:29.613079  204130 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:51:29.615456  204130 out.go:203] 
	W1123 09:51:29.617265  204130 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1123 09:51:29.619066  204130 out.go:203] 

                                                
                                                
** /stderr **
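Note: the non-zero exit above is the expected outcome for this test. minikube rejects --cni=false when the container runtime is containerd, failing fast with a MK_USAGE error (exit status 14) before any cluster is created. The following Go sketch is purely illustrative of that kind of validation; the function name and structure are hypothetical and are not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
)

// validateCNI is a hypothetical stand-in for the start-time validation:
// container runtimes other than Docker need a CNI plugin, so explicitly
// disabling CNI is treated as a usage error.
func validateCNI(containerRuntime, cni string) error {
	if cni == "false" && containerRuntime != "docker" {
		return fmt.Errorf("the %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // matches the exit status recorded in the log above
	}
}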
net_test.go:88: 
----------------------- debugLogs start: false-676928 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-676928

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-676928

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-676928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-676928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-676928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-676928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-676928

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-676928

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-676928

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-676928

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-676928

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-676928" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-676928" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 09:51:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-894987
contexts:
- context:
    cluster: pause-894987
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 09:51:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-894987
  name: pause-894987
current-context: pause-894987
kind: Config
users:
- name: pause-894987
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/pause-894987/client.crt
    client-key: /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/pause-894987/client.key
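For context, the kubeconfig dumped above (only the pause-894987 profile has a context at this point in the run) can be read programmatically. A minimal sketch using client-go's clientcmd package, assuming the KUBECONFIG path reported earlier in this log:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the KUBECONFIG value reported earlier in this log.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21968-3552/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name, cluster := range cfg.Clusters {
		fmt.Printf("cluster %s -> %s\n", name, cluster.Server)
	}
}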

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-676928

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-676928"

                                                
                                                
----------------------- debugLogs end: false-676928 [took: 3.986401288s] --------------------------------
helpers_test.go:175: Cleaning up "false-676928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-676928
--- PASS: TestNetworkPlugins/group/false (4.43s)

                                                
                                    
x
+
TestPause/serial/Pause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-894987 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-894987 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-894987 --output=json --layout=cluster: exit status 2 (387.037858ms)

                                                
                                                
-- stdout --
	{"Name":"pause-894987","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-894987","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
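The --output=json --layout=cluster payload above uses HTTP-style status codes (200 OK, 405 Stopped, 418 Paused). A small sketch of decoding it, with struct fields inferred from the keys visible in this output rather than taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
)

// ClusterStatus mirrors only the fields visible in the JSON above.
type ClusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	raw := []byte(`{"Name":"pause-894987","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-894987","StatusCode":200,"StatusName":"OK"}]}`)
	var st ClusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName) // pause-894987: 418 Paused
	for _, n := range st.Nodes {
		fmt.Printf("node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
	}
}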

                                                
                                    
x
+
TestPause/serial/Unpause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-894987 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-033672 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-033672 "sudo systemctl is-active --quiet service kubelet": exit status 1 (365.13738ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)
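The "Process exited with status 3" in the stderr above is the assertion here: systemctl is-active returns 0 for an active unit and a non-zero code (commonly 3) when it is inactive, which is what a no-Kubernetes profile should report for kubelet. A local sketch of interpreting that exit code (run directly rather than over minikube ssh, so only illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// systemctl is-active exits 0 when the unit is active and non-zero
	// (typically 3) when it is inactive or not loaded.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Printf("kubelet is not active (exit code %d)\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}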

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.85s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-894987 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.93s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-894987 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-894987 --alsologtostderr -v=5: (3.929295265s)
--- PASS: TestPause/serial/DeletePaused (3.93s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (2.58s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.498928794s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-894987
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-894987: exit status 1 (23.259033ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-894987: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.58s)
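The docker volume inspect failure above is the point of the check: after the profile is deleted, its volume must no longer exist. The test shells out to the docker CLI; a rough equivalent using the Docker Go SDK (an alternative sketch, not what the test itself does) might look like:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
	"github.com/docker/docker/errdefs"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Inspect the volume the deleted profile used to own; a "not found"
	// error is the expected outcome after minikube delete.
	_, err = cli.VolumeInspect(context.Background(), "pause-894987")
	switch {
	case errdefs.IsNotFound(err):
		fmt.Println("volume pause-894987 is gone, as expected")
	case err != nil:
		panic(err)
	default:
		fmt.Println("volume still exists")
	}
}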

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-918790
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-918790: (1.372104307s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (46.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (46.866830792s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-676928 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-676928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-698pk" [610dce49-6780-40a6-86a4-45049aa52d32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-698pk" [610dce49-6780-40a6-86a4-45049aa52d32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003711859s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)
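This NetCatPod check, and the matching ones for the other CNI plugins below, follow the same pattern: apply testdata/netcat-deployment.yaml, then wait for a pod labeled app=netcat to reach Running. A minimal client-go sketch of that kind of wait loop (illustrative only; the suite uses its own helpers, and the kubeconfig path is the one reported in this log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Uses the kubeconfig's current context.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21968-3552/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.Background(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				fmt.Println("running:", p.Name)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}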

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (39.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (39.24504029s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (39.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-676928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (58.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (58.726248713s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-wdwl8" [0bd1ad5e-f489-45f3-80b5-8accccf692cf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00367807s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-676928 "pgrep -a kubelet"
I1123 09:54:46.614375    7109 config.go:182] Loaded profile config "kindnet-676928": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-676928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-59jz4" [911231bc-d495-4e74-a3f3-fbcb18f49df9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-59jz4" [911231bc-d495-4e74-a3f3-fbcb18f49df9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003668659s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-676928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (62.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m2.248915233s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (62.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m2.618522376s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (62.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (54.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.460454509s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-vh4zx" [26956f24-9d11-49c7-b566-b37df8d00cfa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004429281s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-676928 "pgrep -a kubelet"
I1123 09:55:34.503380    7109 config.go:182] Loaded profile config "calico-676928": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (8.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-676928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p86kw" [80c5c2e9-793c-4845-9790-af7794ec4137] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p86kw" [80c5c2e9-793c-4845-9790-af7794ec4137] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.010840149s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-676928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (63.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-676928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m3.663105352s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-676928 "pgrep -a kubelet"
I1123 09:56:07.188284    7109 config.go:182] Loaded profile config "custom-flannel-676928": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-676928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5wzdk" [3453eb28-2eb2-4507-9715-4bd3612eb83e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5wzdk" [3453eb28-2eb2-4507-9715-4bd3612eb83e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004348114s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-nwlfk" [280b4c82-a927-4116-9fc4-a001947ec0b3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003960834s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-676928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-676928 "pgrep -a kubelet"
I1123 09:56:19.195267    7109 config.go:182] Loaded profile config "enable-default-cni-676928": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-676928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ft72l" [b15c88ad-df37-4189-bfb9-73ff180e44a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ft72l" [b15c88ad-df37-4189-bfb9-73ff180e44a5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.005064533s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-676928 "pgrep -a kubelet"
I1123 09:56:23.630027    7109 config.go:182] Loaded profile config "flannel-676928": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (8.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-676928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-84qpj" [ffcfa580-1e4a-4679-8d1b-1f337cc3a302] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-84qpj" [ffcfa580-1e4a-4679-8d1b-1f337cc3a302] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004557411s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-676928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-676928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (56.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-709593 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-709593 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (56.487344472s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (56.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (57.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-309734 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-309734 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (57.405116071s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (48.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-412583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-412583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (48.52440226s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.52s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-676928 "pgrep -a kubelet"
I1123 09:57:09.157362    7109 config.go:182] Loaded profile config "bridge-676928": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-676928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8s4bq" [a54b912b-fef3-4c67-b2be-be7d52e8fa8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8s4bq" [a54b912b-fef3-4c67-b2be-be7d52e8fa8c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.009020793s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-676928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)
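The Localhost and HairPin steps above can also be reproduced by hand against any of these profiles. A minimal sketch, assuming the netcat Deployment from testdata/netcat-deployment.yaml is still running and is reachable through a Service named netcat on port 8080 (the commands mirror the ones logged above):
	# pod -> its own localhost:8080 (verifies the container is listening at all)
	kubectl --context bridge-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# pod -> the netcat Service name (hairpin: traffic leaves the pod and comes back through the Service)
	kubectl --context bridge-676928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"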

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-696492 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-696492 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (42.414528173s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.41s)
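Because this profile starts the API server on 8444 instead of the default 8443, the port can be confirmed after the run from the generated kubeconfig. A sketch, assuming minikube registers the kubeconfig cluster under the profile name (this check is not part of the test itself):
	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-696492")].cluster.server}'
	# expected to print an https URL ending in :8444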

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-709593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-709593 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-709593 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-709593 --alsologtostderr -v=3: (12.19330449s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-412583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-412583 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-309734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-309734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.041720458s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-309734 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-412583 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-412583 --alsologtostderr -v=3: (12.254705612s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-309734 --alsologtostderr -v=3
E1123 09:58:03.781924    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/functional-776058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-309734 --alsologtostderr -v=3: (12.330904251s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-709593 -n old-k8s-version-709593
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-709593 -n old-k8s-version-709593: exit status 7 (91.12275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-709593 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
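The same Host probe can be looped over the four profiles in this group; a sketch using only commands already exercised above. Profiles that are still running print Running and exit 0, while stopped ones print Stopped and exit non-zero (exit status 7 in the runs above, which the test treats as "may be ok"):
	for p in old-k8s-version-709593 embed-certs-412583 no-preload-309734 default-k8s-diff-port-696492; do
	  out/minikube-linux-amd64 status --format={{.Host}} -p "$p" -n "$p" || echo "$p: exit $?"
	done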

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-709593 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-709593 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (46.84422934s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-709593 -n old-k8s-version-709593
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412583 -n embed-certs-412583
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412583 -n embed-certs-412583: exit status 7 (100.792703ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-412583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-309734 -n no-preload-309734
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-309734 -n no-preload-309734: exit status 7 (105.826468ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-309734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (48.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-412583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-412583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.851492861s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412583 -n embed-certs-412583
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (47.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-309734 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-309734 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.525907751s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-309734 -n no-preload-309734
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-696492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-696492 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-696492 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-696492 --alsologtostderr -v=3: (12.179567811s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492: exit status 7 (88.570907ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-696492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (43.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-696492 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-696492 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (43.055620745s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (43.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lbcz9" [c45feaa9-cf33-4cbb-b9f0-4b667069a2c9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004801004s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lbcz9" [c45feaa9-cf33-4cbb-b9f0-4b667069a2c9] Running
E1123 09:58:59.227165    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:58:59.233616    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:58:59.245051    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:58:59.266452    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:58:59.307874    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:58:59.389395    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:58:59.551549    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:58:59.873636    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:59:00.515467    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:59:01.797488    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003750882s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-709593 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k899g" [a7ca4476-d6b3-41e9-9978-7ed14e97d9ab] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005060199s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-96phv" [7aea903c-263c-4c21-93d3-d73bc0b63e30] Running
E1123 09:59:04.359285    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00421096s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-709593 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
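The JSON listing is what the test consumes; for manual inspection the same data can be rendered in a friendlier layout. A sketch, assuming --format=table is available in this minikube build:
	out/minikube-linux-amd64 -p old-k8s-version-709593 image list --format=table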

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-709593 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-709593 -n old-k8s-version-709593
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-709593 -n old-k8s-version-709593: exit status 2 (354.756361ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-709593 -n old-k8s-version-709593
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-709593 -n old-k8s-version-709593: exit status 2 (373.290688ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-709593 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-709593 -n old-k8s-version-709593
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-709593 -n old-k8s-version-709593
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k899g" [a7ca4476-d6b3-41e9-9978-7ed14e97d9ab] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003984757s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-309734 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-96phv" [7aea903c-263c-4c21-93d3-d73bc0b63e30] Running
E1123 09:59:09.481513    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003461536s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-412583 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (28.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-859897 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-859897 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (28.628775494s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.63s)
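This profile passes --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16; whether the override reached kubeadm can be checked against the ClusterConfiguration it stores in kube-system. A sketch, not part of the test:
	kubectl --context newest-cni-859897 -n kube-system get configmap kubeadm-config -o yaml | grep podSubnet
	# expected: podSubnet: 10.42.0.0/16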

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-309734 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-412583 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-309734 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-309734 -n no-preload-309734
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-309734 -n no-preload-309734: exit status 2 (406.111858ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-309734 -n no-preload-309734
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-309734 -n no-preload-309734: exit status 2 (382.666664ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-309734 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-309734 --alsologtostderr -v=1: (1.085164712s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-309734 -n no-preload-309734
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-309734 -n no-preload-309734
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-412583 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412583 -n embed-certs-412583
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412583 -n embed-certs-412583: exit status 2 (404.089485ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-412583 -n embed-certs-412583
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-412583 -n embed-certs-412583: exit status 2 (418.695651ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-412583 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-412583 --alsologtostderr -v=1: (1.020557847s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412583 -n embed-certs-412583
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-412583 -n embed-certs-412583
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9gp7g" [8f426f13-a224-46ac-990a-35856e302371] Running
E1123 09:59:40.204898    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/auto-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:59:40.256474    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/kindnet-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:59:40.262949    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/kindnet-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:59:40.275229    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/kindnet-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:59:40.296929    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/kindnet-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:59:40.338581    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/kindnet-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004099964s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-859897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1123 09:59:40.420522    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/kindnet-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:59:40.582105    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/kindnet-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9gp7g" [8f426f13-a224-46ac-990a-35856e302371] Running
E1123 09:59:40.903498    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/kindnet-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004267536s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-696492 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-859897 --alsologtostderr -v=3
E1123 09:59:41.545178    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/kindnet-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-859897 --alsologtostderr -v=3: (1.312334217s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-859897 -n newest-cni-859897
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-859897 -n newest-cni-859897: exit status 7 (88.053308ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-859897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (11.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-859897 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1123 09:59:42.826744    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/kindnet-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:59:45.388739    7109 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/kindnet-676928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-859897 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (10.68007883s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-859897 -n newest-cni-859897
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-696492 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-696492 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492: exit status 2 (346.197858ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492: exit status 2 (353.413464ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-696492 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-696492 -n default-k8s-diff-port-696492
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.04s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-859897 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-859897 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-859897 -n newest-cni-859897
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-859897 -n newest-cni-859897: exit status 2 (366.672541ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-859897 -n newest-cni-859897
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-859897 -n newest-cni-859897: exit status 2 (357.743313ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-859897 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-859897 -n newest-cni-859897
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-859897 -n newest-cni-859897
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.91s)
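Note: the Pause subtest above can be reproduced by hand against the same profile. The sketch below is not part of the test output; it assumes a minikube binary on PATH (the harness invokes it as out/minikube-linux-amd64) and that the newest-cni-859897 profile from this run still exists, and it uses the same commands the test drives.

# Pause the control plane; while paused, status reports APIServer=Paused and
# Kubelet=Stopped and exits with code 2, which the test treats as "may be ok".
minikube pause -p newest-cni-859897 --alsologtostderr -v=1
minikube status --format='{{.APIServer}}' -p newest-cni-859897 || true
minikube status --format='{{.Kubelet}}' -p newest-cni-859897 || true

# Unpause and confirm both components report Running again (status exits 0).
minikube unpause -p newest-cni-859897 --alsologtostderr -v=1
minikube status --format='{{.APIServer}}' -p newest-cni-859897
minikube status --format='{{.Kubelet}}' -p newest-cni-859897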

                                                
                                    

Test skip (26/333)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-676928 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-676928

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-676928

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-676928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-676928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-676928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-676928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-676928

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-676928

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-676928

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-676928

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-676928

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-676928" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-676928" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-3552/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 09:51:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-894987
contexts:
- context:
    cluster: pause-894987
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 09:51:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-894987
  name: pause-894987
current-context: pause-894987
kind: Config
users:
- name: pause-894987
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/pause-894987/client.crt
    client-key: /home/jenkins/minikube-integration/21968-3552/.minikube/profiles/pause-894987/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-676928

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-676928"

                                                
                                                
----------------------- debugLogs end: kubenet-676928 [took: 3.969786971s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-676928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-676928
--- SKIP: TestNetworkPlugins/group/kubenet (4.18s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-676928 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-676928" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-676928

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-676928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676928"

                                                
                                                
----------------------- debugLogs end: cilium-676928 [took: 6.324968904s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-676928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-676928
--- SKIP: TestNetworkPlugins/group/cilium (6.58s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-178820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-178820
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    